Jenkins is, by far, the most used CI/CD tool on the market. That comes as no surprise since it has been around for a while, it has one of the biggest open source communities, it has an enterprise version for those who need it, and it is straightforward to extend to suit (almost) anyone’s needs.
Products that dominate the market for years tend to be stable and very feature-rich. Jenkins is no exception. However, with age come some downsides as well. In the case of Jenkins, automated setup is one of the things that leaves a lot to be desired. If you need Jenkins to serve as an orchestrator of your automation and tasks, you’ll find it effortless to use. But, if you need to automate Jenkins itself, you’ll realize that it is not as smooth as you’d expect from modern tools. Nevertheless, Jenkins setup can be automated, and we’ll go through one possible solution.
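As a taste of the direction we’ll take, here is a minimal sketch that runs Jenkins as a Swarm service so that its creation is repeatable. The image tag, published port, and volume name are illustrative, not the article’s final solution.

```bash
# Run Jenkins as a Swarm service so its creation is repeatable.
# The image tag, published port, and volume name are illustrative.
docker service create --name jenkins \
    -p 8080:8080 \
    --mount type=volume,source=jenkins-home,target=/var/jenkins_home \
    jenkins:alpine
```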
Docker 1.13 introduced a set of features that allow us to centrally manage secrets and pass them only to the services that need them. They give us a much-needed mechanism for supplying information that should be hidden from anyone except designated services.
A secret (at least from Docker’s point of view) is a blob of data. Typical use cases would be certificates, SSH private keys, passwords, and so on. Secrets should stay secret, meaning that they should not be stored unencrypted or transmitted over a network in plain text.
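As a minimal sketch (the secret name, its value, and the service are made up), creating a secret and attaching it to a service looks like this:

```bash
# Create a secret from stdin; the name and the value are examples.
echo "s3cr3t" | docker secret create my_password -

# Attach the secret to a service. Inside the container it appears
# as an in-memory (tmpfs) file at /run/secrets/my_password.
docker service create --name my-service \
    --secret my_password \
    alpine sleep 1000000
```

The secret is never written unencrypted to disk on the node; it lives in a tmpfs mount for as long as the service task runs.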
I am continuously getting questions about blue-green releases inside a Docker Swarm cluster: “Viktor, in your book The DevOps 2.0 Toolkit you told us to use blue-green deployment. How do we do it with services running inside a Swarm cluster?” My answer is usually something along the following lines. With the old Swarm, blue-green releases were easier than rolling updates (neither was supported out of the box). Now we have rolling updates. Use them! The reaction to that is often that we still want blue-green releases.
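For reference, a rolling update in Swarm is a single command. The service name, image tag, and update settings below are illustrative:

```bash
# Update the service image one replica at a time, with a 10 second
# pause between batches. Service name and tag are examples.
docker service update \
    --image go-demo:1.1 \
    --update-parallelism 1 \
    --update-delay 10s \
    go-demo
```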
This post is my brainstorming on this subject. I did not write it as a result of some deep thinking. There is no great wisdom in it. I just wrote what was passing through my mind while I was trying to answer another one of the emails containing blue-green deployment questions. What follows might not make much sense. Don’t be harsh on me.
Bells are ringing! Docker v1.13 is out!
The question I receive most often during my Docker-related talks and workshops is related to Swarm and Compose.
Someone: How can I use Docker Compose with Docker Swarm?
Me: You can’t! You can convert your Compose files into a Bundle, but Bundles do not support all Swarm features. If you want to use Swarm to its fullest, be prepared for docker service create commands that contain a never-ending list of arguments.
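To illustrate the problem, a single service could easily require something like the following (every name and value here is made up):

```bash
# A typical long-winded service definition; every name and value
# here is made up for illustration.
docker service create --name go-demo \
    --network go-demo --network proxy \
    -e DB=go-demo-db \
    --replicas 3 \
    --reserve-memory 128m \
    go-demo:1.0
```

Docker v1.13 removes much of that pain: version 3 Compose files can be deployed to a Swarm cluster directly with docker stack deploy --compose-file docker-compose.yml <stack-name>.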
I published The DevOps 2.1 Toolkit: Docker Swarm on LeanPub early. I think that only around 20% of it was written when it went public. That allowed you to get early access to the material, and me to get your feedback. The result is fantastic. Many of you sent me notes, reported bugs, proposed improvements, recommended tools and processes that should be explored, and so on.
Swarm Mode is taking Docker scheduling by storm. You tried it in your lab, or by setting up a few VMs locally and joining them into a Swarm cluster. Now it’s time to move to the next level and create a cluster in AWS. Which tools should you use to do that? How should you set up your AWS Swarm cluster?
If you are like me, you have already discarded the option of creating servers manually through the AWS Console, entering each with SSH, and running the commands that install Docker Engine and join the node with the others. You know better than that and want a reliable and automated process that will create and maintain the cluster.
Among the myriad of options we can choose from to create a Docker Swarm cluster in AWS, three stand out from the crowd: Docker Machine combined with the AWS CLI, Docker for AWS as a CloudFormation template, and Packer combined with Terraform. That is, by no means, the final list of tools we could use. Time is limited, though, and I promised myself that this article would be shorter than War and Peace, so I had to draw the line somewhere. Those three combinations are, in my opinion, the best candidates as your tools of choice. Even if you choose something else, this article will hopefully give you an insight into the direction you might want to take.
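As a taste of the first option, here is roughly what Docker Machine with its AWS driver looks like. The region, instance type, and node name are examples, and AWS credentials are assumed to be in the usual environment variables:

```bash
# Create an EC2 instance with Docker Engine installed. The region,
# instance type, and node name are examples; AWS credentials are
# expected in the usual environment variables.
docker-machine create \
    --driver amazonec2 \
    --amazonec2-region us-east-1 \
    --amazonec2-instance-type t2.micro \
    swarm-1

# Point the local Docker client at the new node and initialize Swarm.
eval $(docker-machine env swarm-1)
docker swarm init --advertise-addr $(docker-machine ip swarm-1)
```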
In the Forwarding Logs From All Containers Running Anywhere Inside A Docker Swarm Cluster article, we managed to add centralized logging to our cluster. Logs from any container running inside any of the nodes are shipped to a central location. They are stored in ElasticSearch and available through Kibana. However, the fact that we have easy access to all the logs does not mean that we have all the information we need to debug a problem or prevent it from happening in the first place. We need to complement our logs with the rest of the information about the system. We need much more than what logs alone can provide.