Bells are ringing! Docker v1.13 is out!
The most common question I receive during my Docker-related talks and workshops is usually related to Swarm and Compose.
Someone: How can I use Docker Compose with Docker Swarm?
Me: You can’t! You can convert your Compose files into a Bundle, but Bundles do not support all Swarm features. If you want to use Swarm to its fullest, be prepared for docker service create commands that contain a never-ending list of arguments.
Such an answer was usually followed by disappointment. Docker Compose showed us the advantages of specifying everything in a YAML file, as opposed to trying to remember all the arguments we have to pass to docker commands. It allowed us to store service definitions in a repository, thus providing a reproducible and well-documented process for managing them. Docker Compose replaced Bash scripts, and we loved it. Then, Docker v1.12 came along and put a difficult choice in front of us. Should we adopt Swarm and discard Compose? Since summer 2016, Swarm and Compose were not in love anymore. It was a painful divorce.
But, after almost half a year of separation, they are back together, and we can witness their second honeymoon. Kind of… We do not need Docker Compose binary for Swarm services, but we can use its YAML files.
Docker Engine v1.13 introduced support for Compose YAML files within the stack command. At the same time, Docker Compose v1.10 introduced a new version 3 of its format. Together, they allow us to manage our Swarm services using the already familiar Docker Compose YAML format.
I will assume you are already familiar with Docker Compose and won’t go into details of everything we can do with it. Instead, we’ll go through an example of creating a few Swarm services.
We’ll explore how to create the Docker Flow Proxy service through Docker Compose files and the docker stack deploy command.
The examples that follow assume that you are using Docker v1.13+, Docker Compose v1.10+, and Docker Machine v0.9+.
If you are a Windows user, please run all the examples from Git Bash (installed through Docker Toolbox). Also, make sure that your Git client is configured to check out the code AS-IS. Otherwise, Windows might change carriage returns to the Windows format.
Swarm Cluster Setup
To setup an example Swarm cluster using Docker Machine, please run the commands that follow.
Feel free to skip this section if you already have a working Swarm cluster.
curl -o swarm-cluster.sh \
    https://raw.githubusercontent.com/vfarcic/docker-flow-proxy/master/scripts/swarm-cluster.sh

chmod +x swarm-cluster.sh

./swarm-cluster.sh

docker-machine ssh node-1
Now we’re ready to deploy services.
Creating Swarm Services Through Docker Stack Commands
We’ll start by creating a network.
docker network create --driver overlay proxy
The proxy network will be dedicated to the proxy container and services that will be attached to it.
We’ll use docker-compose-stack.yml from the vfarcic/docker-flow-proxy repository to create the proxy stack.
The content of the docker-compose-stack.yml file is as follows.
version: "3"

services:
  proxy:
    image: vfarcic/docker-flow-proxy
    ports:
      - 80:80
      - 443:443
    networks:
      - proxy
    environment:
      - LISTENER_ADDRESS=swarm-listener
      - MODE=swarm
    deploy:
      replicas: 2

  swarm-listener:
    image: vfarcic/docker-flow-swarm-listener
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DF_NOTIFY_CREATE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/reconfigure
      - DF_NOTIFY_REMOVE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/remove
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  proxy:
    external: true
The format is written in version 3 (mandatory for docker stack deploy). It contains two services: proxy and swarm-listener. Since this article is not meant to teach you how to use the proxy, I won’t go into the meaning of each argument.
When compared with previous Compose versions, most of the new arguments are defined within the deploy section. You can think of that section as a placeholder for Swarm-specific arguments. In this case, we are specifying that the proxy service should have two replicas, while the swarm-listener service should be constrained to nodes with the manager role. Everything else defined for those two services uses the same format as in earlier Compose versions.
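For comparison, creating the proxy service without a Compose file would require a docker service create command along these lines (a sketch based on the YAML above; flags as available in Docker 1.13):

```
docker service create --name proxy \
    -p 80:80 \
    -p 443:443 \
    --network proxy \
    -e LISTENER_ADDRESS=swarm-listener \
    -e MODE=swarm \
    --replicas 2 \
    vfarcic/docker-flow-proxy
```

A similar command would be needed for the swarm-listener service, with a --constraint 'node.role == manager' argument and a --mount for the Docker socket. The stack file replaces both commands.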
At the bottom of the YAML file is the list of networks referenced within services. If a service does not specify any, a default network will be created automatically. In this case, we opted for manual creation of a network since services from other stacks should be able to communicate with the proxy. Therefore, we created the network manually and defined it as external in the YAML file.
Let’s create the stack based on the YAML file we explored.
curl -o docker-compose-stack.yml \
    https://raw.githubusercontent.com/vfarcic/docker-flow-proxy/master/docker-compose-stack.yml

docker stack deploy -c docker-compose-stack.yml proxy
The first command downloaded the Compose file docker-compose-stack.yml from the vfarcic/docker-flow-proxy repository. The second command created the services that form the stack.
The tasks of the stack can be seen through the stack ps command.
docker stack ps proxy
The output is as follows (IDs are removed for brevity).
NAME                    IMAGE                                      NODE    DESIRED STATE  CURRENT STATE          ERROR  PORTS
proxy_proxy.1           vfarcic/docker-flow-proxy:latest           node-2  Running        Running 2 minutes ago
proxy_swarm-listener.1  vfarcic/docker-flow-swarm-listener:latest  node-1  Running        Running 2 minutes ago
proxy_proxy.2           vfarcic/docker-flow-proxy:latest           node-3  Running        Running 2 minutes ago
We are running two replicas of the proxy (for high availability in the case of a failure) and one replica of the swarm-listener.
Deploying More Stacks
Let’s deploy another stack.
This time we’ll use the Docker stack defined in the Compose file docker-compose-stack.yml located in the vfarcic/go-demo repository. It is as follows.
version: '3'

services:
  main:
    image: vfarcic/go-demo
    environment:
      - DB=db
    networks:
      - proxy
      - default
    deploy:
      replicas: 3
      labels:
        - com.df.notify=true
        - com.df.distribute=true
        - com.df.servicePath=/demo
        - com.df.port=8080

  db:
    image: mongo
    networks:
      - default

networks:
  default:
    external: false
  proxy:
    external: true
The stack defines two services (main and db). They will communicate with each other through the default network that will be created automatically by the stack (no need for a docker network create command). Since the main service is an API, it should be accessible through the proxy, so we’re attaching the proxy network to it as well.
The important thing to note is that we used the deploy section to define Swarm-specific arguments. In this case, the main service defines that there should be three replicas and a few labels. As with the previous stack, we won’t go into the details of each service. If you’d like to go into more depth about the labels used with the main service, please visit the Running Docker Flow Proxy In Swarm Mode With Automatic Reconfiguration tutorial.
Let’s deploy the stack.
curl -o docker-compose-go-demo.yml \
    https://raw.githubusercontent.com/vfarcic/go-demo/master/docker-compose-stack.yml

docker stack deploy \
    -c docker-compose-go-demo.yml go-demo

docker stack ps go-demo
We downloaded the stack definition, executed the stack deploy command that created the services, and ran the stack ps command that lists the tasks belonging to the go-demo stack. The output is as follows (IDs are removed for brevity).
NAME            IMAGE                   NODE    DESIRED STATE  CURRENT STATE           ERROR  PORTS
go-demo_main.1  vfarcic/go-demo:latest  node-2  Running        Running 7 seconds ago   ...
go-demo_db.1    mongo:latest            node-2  Running        Running 21 seconds ago
go-demo_main.2  vfarcic/go-demo:latest  node-2  Running        Running 19 seconds ago  ...
go-demo_main.3  vfarcic/go-demo:latest  node-2  Running        Running 20 seconds ago  ...
Since the Mongo database image is much bigger than the main service, it takes more time to pull, resulting in a few failures. The go-demo service is designed to fail if it cannot connect to its database. Once the db service is running, the main service should stop failing, and we’ll see three replicas with the current state Running.
After a few moments, the swarm-listener service will detect the main service from the go-demo stack and send the proxy a request to reconfigure itself. We can see the result by sending an HTTP request to the proxy.
curl -i "localhost/demo/hello"
The output is as follows.
HTTP/1.1 200 OK
Date: Thu, 19 Jan 2017 23:57:05 GMT
Content-Length: 14
Content-Type: text/plain; charset=utf-8

hello, world!
The proxy was reconfigured and forwards all requests with the base path /demo to the main service from the go-demo stack.
For more advanced usage of the proxy, please see the examples from the Running Docker Flow Proxy In Swarm Mode With Automatic Reconfiguration tutorial, or consult the configuration and usage documentation.
To Stack Or Not To Stack
Docker stack is a great addition to Swarm Mode. We no longer have to deal with docker service create commands that tend to have a never-ending list of arguments. With services specified in Compose YAML files, we can replace those long commands with a simple docker stack deploy. If those YAML files are stored in code repositories, we can apply the same practices to service deployments as to any other area of software engineering. We can track changes, do code reviews, share with others, and so on.
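A useful property worth noting: docker stack deploy is idempotent, so updating a running stack is just a matter of editing the YAML file and re-running the same command. A sketch:

```
# Edit docker-compose-stack.yml (e.g., change the number of replicas),
# then re-run the same command. Only services whose definitions changed
# are updated; the rest are left as they are.
docker stack deploy -c docker-compose-stack.yml proxy
```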
The Docker stack command and its ability to use Compose files is a very welcome addition to the Docker ecosystem.
Please remove the Docker Machine VMs we created. You might need those resources for some other tasks.
exit

docker-machine rm -f node-1 node-2 node-3
The DevOps 2.1 Toolkit: Docker Swarm
If you liked this article, you might be interested in The DevOps 2.1 Toolkit: Docker Swarm book. Unlike the previous title in the series (The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices), which provided a general overview of some of the latest DevOps practices and tools, this book is dedicated entirely to Docker Swarm and the processes and tools we might need to build, test, deploy, and monitor services running inside a cluster.
You can get a copy from Amazon.com (and the other worldwide sites) or LeanPub. It is also available as The DevOps Toolkit Series bundle.
Give the book a try and let me know what you think.
This is for a single host, am I right? What does it look like for multiple DigitalOcean droplets? I just can’t find any documentation for that.
This is not for a single host. docker stack deploy will send instructions to Docker Swarm Mode, which will schedule services across the cluster.
Hey Viktor, this example worked quite well in January, but after Docker 1.13.1 I got
$ docker stack deploy -c docker-compose-go-demo.yml go-demo
Error response from daemon: network go-demo_default not found
$ docker stack ps proxy
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
lk2oyaqbddhg proxy_proxy.1 vfarcic/docker-flow-proxy:latest node-2 Running Running 51 minutes ago
s9r68fuhlpxr proxy_swarm-listener.1 vfarcic/docker-flow-swarm-listener:latest node-1 Running Running 52 minutes ago
l99v7d23i0pt proxy_proxy.2 vfarcic/docker-flow-proxy:latest node-3 Running Running 51 minutes ago
$ docker stack ps go-demo
Nothing found in stack: go-demo
And I also got “Docker Flow Proxy: 503 Service Unavailable” when running curl -i “localhost/demo/hello”.
Can you confirm this example still works?
I just reran the commands from this article and everything works.
I did notice a similar error in the past. I think it was somehow related to service discovery not being fast enough to pick it up. Those few times, I just reran it and it worked. I cannot guarantee that the problem you’re experiencing is the same 😦
If creating the stack did not work (you can see it with docker stack ps go-demo), the services are not created, and sending a request to them (curl) will fail as well.
Can you try it one more time (maybe with a fresh Swarm cluster)?
Thanks for the confirmation. I can make it work on another Mac now by just removing and cleaning everything. On a fresh Swarm, the first attempt will fail (like you said) but the second attempt will succeed.
I’ll try again with another Mac to see the problems and will let you know. Thanks!
Hey Viktor, here’s summary what I found https://medium.com/@katopz/easy-docker-1-13-swarm-mode-stack-bed0d608ecdd#.70k02yyk5
Sorry for not getting to you earlier.
That’s a really good writeup. The problem is that I wasn’t able to reproduce it myself, so I’m still in the dark as to what causes it or how to fix it 😦
Thanks for the article. I applied it on my prototype based on Docker 1.12; this docker stack command is quite easy to use. Some remarks:
– a service name is automatically created based on the stack name and the service defined in the YAML, which makes it possible to get the status of an individual service via the command docker service ps go-demo_main. Some governance about stack names must be defined: if it’s a new stack name but with the same service description, it will create separate services; if it’s the same stack name with the same services, an update will be done for each service.
– it simplifies the deployment; I now see 3 main steps:
– network creation (perhaps it will be part of the Compose YAML in the future?)
– deployment of the stack
– curl command to update the proxy
Once you get used to working with stacks, you’ll probably use “docker stack ps [STACK_NAME]” commands much more than “docker service ps…”. Going back to your question, you’re right. Services are named with a combination of the name of the stack and the name of the service, separated with an underscore.
I tend to create networks manually only when they span multiple stacks. I do that only because it’s more convenient. You could always use a network from one stack inside another. For example, the https://github.com/vfarcic/docker-flow-stacks/blob/master/logging/logging-df-proxy.yml stack will automatically create a network [STACK_NAME]_default. If the stack name is logging, the network will be called logging_default. Now, if you take a look at https://github.com/vfarcic/docker-flow-stacks/blob/master/metrics/prometheus-grafana-df-proxy.yml, you’ll see that it uses the logging_default network created with the first stack.
The naming convention is the same as in previous Compose versions. It’s [PROJECT_NAME]_[SOMETHING]. In the case of stacks, PROJECT_NAME is the stack name and SOMETHING is the name of a service, network, volume, and so on. In other words, whatever you specify is created with its name prefixed with the name of the stack and an underscore.
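The convention can be sketched with plain shell string interpolation (the stack and service names below are just illustrative):

```shell
# Swarm prefixes every stack resource with the stack name and an underscore.
STACK=go-demo     # example stack name
SERVICE=main      # example service name
FULL_NAME="${STACK}_${SERVICE}"
echo "$FULL_NAME"   # go-demo_main
```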
There is no need for the curl command. Docker Flow Swarm Listener will update the proxy for you. Just make sure that the services that should be configured in the proxy have the labels.
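The labels in question are the com.df.* ones shown earlier in the go-demo stack. As a reminder, the relevant fragment of the service definition looks like this:

```yaml
deploy:
  replicas: 3
  labels:
    - com.df.notify=true        # tells the listener to notify the proxy
    - com.df.distribute=true
    - com.df.servicePath=/demo  # base path the proxy forwards to the service
    - com.df.port=8080          # internal port of the service
```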
I agree with katopz. The examples & documentation around real production use don’t exist, e.g.
1. Managing different environments, and configuration-based compose.yml through environment variable substitution.
2. Real deployment scenarios in a modern microservices architecture: I want to deploy and update 1 thing in my stack while everything continues running happily.
3. In relation to #2, strategies and rollout policies to minimise or negate disruption.
4. Server roles and container placement.
5. Recommendations for configuration management, secret storage.
etc etc etc.
The individual pieces for these points do exist. I guess what I’m asking is for leaders of the community to provide more complete examples based upon real, battle-hardened use.
I’m not sure I understand the challenge with that. Deploying one thing (one service) means that all you have to do is modify that service. Docker will make sure not to touch the rest if their specification is the same as what’s currently running.
I think that depends from one case to another. Not everyone shares the same architecture. I do not believe in rules that everyone should follow. Our job is to understand the tools we can use and make decisions based on individual use cases and circumstances. For example, strategy to deploy a microservice written in Go and using MongoDB is not the same as dealing with applications that use shared database.
I’m not sure what you mean by that.
What type of configuration management do you have in mind? It depends on whether you fully adopted Docker or not, whether you’re running in the cloud or on-premise, and so on and so forth. Secrets storage was released but is still not widely announced. You’ll hear from Docker about it soon. Please note that it is in its infancy. If you’d like something mature now, I’d recommend HashiCorp Vault.
If this sentence refers to Docker Stack, that would be very hard to do only a few days after it was released. Some of us played with it while in beta. Many of us will start using it now. If you are an early adopter, you are part of the community that is trying new things and, hopefully, contributing back. If you’re expecting battle-hardened examples, I think you’ll have to give us (the community) a bit of time. That comes after a few months of usage (if not more). Doesn’t it?
Maybe for the bullet #4, he meant the following…
constraints: [node.role == manager]
Do you know how to constrain to a specific Docker host in your Swarm?
I think it should be:
constraints: [node.id == <NODE_ID>]
I might be wrong since I never use that one. I think that pinning a service to a specific node defies Swarm’s purpose. Even if I wanted to put something on a specific single node, I would still use labels, even if only one node matches them. That way, when the node goes down (please notice that I said when and not if), I can create a new one with the same label and let Swarm do the work of rescheduling it there. That way, I don’t need to be in a rush to figure out why the node failed and risk more downtime.
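The label-based approach could look along these lines (a sketch; the db=true label and the node and service names are hypothetical):

```
# Label the node that should run the service.
docker node update --label-add db=true node-3

# Constrain the service to nodes carrying that label.
# In a stack file, the equivalent would be:
#   deploy:
#     placement:
#       constraints: [node.labels.db == true]
docker service create --name mongo \
    --constraint 'node.labels.db == true' \
    mongo
```

If node-3 fails, labeling a replacement node with db=true lets Swarm reschedule the service there without any change to the service definition.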
I will try it and let you know.
Nice book, by the way; I really enjoy it.
Can you use docker stack deploy‘s --compose-file option multiple times like you could with docker-compose -f?
Yes. In that aspect, it works in a similar way as Docker Compose. It’s idempotent. You can run it as many times as you like and the result will always be the same.
How would you achieve volume synchronization between Swarm nodes, so that a service switching from one node to another works consistently?
You should combine network drives with one of the Docker volume plugins. My favorite is REX-Ray. If you don’t want to use plugins, a simple NFS directory mounted on all nodes should do.
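An NFS-backed volume can also be created through Docker’s local driver (a sketch; the server address, export path, and volume name are hypothetical):

```
docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.0.10,rw \
    --opt device=:/exports/mongo \
    mongo-data
```

With the local driver, the volume has to exist on every node where the service might be scheduled, which is why a volume plugin such as REX-Ray is usually the more robust option.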
Without a network drive, is there any recommended way?
I have tried GlusterFS, but failed to probe the nodes.
Sample: two VMs with two virtual disks each – one for the OS and one for synchronization.
You can install NFS on one of those VMs.
Thanks. I will try it.
I have used GlusterFS in combination with the docker-local-persist-volume-plugin with success. Each Swarm volume was located, through the plugin, in a specified directory on a common file system, and GlusterFS synchronized this directory between the nodes. I used a common GlusterFS configuration with 2 mirrors in the cluster, i.e. both nodes contain the whole directory data.
It worked great on two nodes in different geographical locations. Never had a problem with GlusterFS.
Pingback: Docker: How To Get Started With Containers in Ubuntu – Sajjan's Blog