Jenkins (forked from Hudson after a dispute with Oracle) has been around for a long time and has established itself as the leading platform for the creation of continuous integration (CI) and continuous delivery/deployment (CD) pipelines. The idea behind it is that we create jobs that perform certain operations (building, testing, deploying, and so on) and chain those jobs together to form a CI/CD pipeline. Its success was so great that other products followed its lead and we got Bamboo, TeamCity, and others. They all used a similar logic of having jobs and chaining them together. Operations, maintenance, monitoring, and the creation of jobs are mostly done through their UIs. However, none of the other products managed to supplant Jenkins, thanks to its strong community support. There are over one thousand plugins, and one would have a hard time imagining a task that is not supported by at least one of them. The support, flexibility, and extensibility featured by Jenkins allowed it to maintain its reign as the most popular and most widely used CI/CD tool throughout all this time. This approach, based on heavy usage of UIs, can be considered the first generation of CI/CD tools (even though there were others before).
With time, new products came into being and, with them, new approaches were born. Travis, CircleCI, and the like moved the process to the cloud and based themselves on auto-discovery and (mostly YAML) configuration that resides in the same repository as the code that should be moved through the pipeline. The idea was good and felt quite refreshing. Instead of defining your jobs in a centralized location, those tools would inspect your code and act depending on the type of the project. If, for example, they found a build.gradle file, they would assume that your project should be tested and built with Gradle. As a result, they would run gradle check to test your code and, if the tests passed, follow it with gradle assemble to build the artifacts. We can consider those products to be the second generation of CI/CD tools.
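To give a feeling for this style of configuration, a minimal file for such a tool might look like the following. This is a hypothetical .travis.yml for a Gradle project; the JDK entry and explicit commands are illustrative assumptions (Travis would auto-detect a Gradle project even without the script section):

```yaml
# Hypothetical .travis.yml for a Gradle project (illustrative only).
# The script section is spelled out to make the two steps explicit;
# auto-discovery would run equivalent commands on its own.
language: java
jdk:
  - oraclejdk8
script:
  - gradle check     # run the tests
  - gradle assemble  # build the artifacts if the tests passed
```

The whole specification is a handful of lines living next to the code, which is exactly what made these tools feel so low-maintenance.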
The first and the second generations of tools suffer from different problems. Jenkins and the like feature power and flexibility that allow us to create custom-tailored pipelines that can handle almost any level of complexity. That power comes at a price. When you have tens of jobs, their maintenance is quite easy. However, when the number increases to hundreds, managing them becomes tedious and time-consuming.
Let’s say that an average pipeline has five jobs (building, pre-deployment testing, deployment to a staging environment, post-deployment testing, and deployment to production). In reality, there are often more than five jobs, but let’s keep the estimate optimistic. If we multiply those jobs by, let’s say, twenty pipelines belonging to twenty different projects, the total number reaches one hundred. Now imagine that we need to change all those jobs from, let’s say, Maven to Gradle. We can choose to modify them through the Jenkins UI or be brave and apply the changes directly to the Jenkins XML files that represent those jobs. Either way, this seemingly simple change would require quite some dedication. Moreover, due to its nature, everything is centralized in one location, making it hard for teams to manage the jobs belonging to their own projects. Besides, project-specific configuration and code belong in the same repository as the rest of the application code, not in some central location. And Jenkins is not alone with this problem; most of the other self-hosted tools have it as well. It comes from the era when heavy centralization and horizontal division of tasks were thought to be a good idea. At approximately the same time, we thought that UIs should solve most of our problems. Today, we know that many types of tasks are easier to define and maintain as code than through some UI.
What I’m trying to say is that different approaches belong to different contexts and types of tasks. Jenkins and similar tools benefit greatly from their UIs for monitoring and visual representation of statuses. Where they fall short is the creation and maintenance of jobs. That type of task is much better done through some kind of code. With Jenkins, we had the power but paid the price for it in the form of maintenance effort.
The “second generation” of CI/CD tools (Travis, CircleCI, and the like) reduced that maintenance problem to an almost negligible effort. In many cases, there is nothing to be done, since they will discover the type of the project and “do the right thing”. In some other cases, we have to write a .travis.yml, a circle.yml, or a similar file to give the tool additional instructions. Even then, such a file tends to contain only a few lines of specification and resides together with the code, making it easy for the project team to manage. However, these tools do not replace “the first generation”, since they tend to work well only on small projects with very simple pipelines. A “real” continuous delivery/deployment pipeline is much more complex than what those tools are capable of. In other words, we gained low maintenance but lost the power and, in many cases, the flexibility.
Today, old-timers like Jenkins, Bamboo, and TeamCity continue to dominate the market and are the recommended tools for anything but small projects, while cloud tools like Travis and CircleCI dominate smaller settings. At the same time, the team maintaining the Jenkins codebase recognized the need to introduce a few important improvements that would bring it to the next level (I’ll call it the “third generation” of CI/CD tools). They introduced Jenkins Workflow and the Jenkinsfile. Together, they bring some very useful and powerful features. With Jenkins Workflow, we can write a whole pipeline as a single script using a Groovy-based DSL that utilizes most of the existing Jenkins features. The end result is a huge reduction in code (Workflow scripts are much smaller than traditional Jenkins job definitions in XML) and a reduction in jobs (one Workflow job can substitute for many traditional Jenkins jobs). This makes management and maintenance much easier. The newly introduced Jenkinsfile, on the other hand, allows us to define the Workflow script inside the repository, together with the code. This means that the developers in charge of a project can be in control of its CI/CD pipeline as well. That way, responsibilities are much better divided: overall Jenkins management is centralized, while individual CI/CD pipelines are placed where they belong (together with the code that should be moved through them). Moreover, if we combine all that with the Multibranch Workflow job type, we can even fine-tune the pipeline depending on the branch. For example, we might have the full process defined in the Jenkinsfile residing in the master branch and shorter flows in each feature branch. What is put into each Jenkinsfile is up to those maintaining each repository/branch. With the Multibranch Workflow job, Jenkins will create a job whenever a new branch is created and run whatever is defined in its Jenkinsfile. Similarly, it will remove jobs when branches are removed.
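To give a feeling for the format, a Workflow script defined in a Jenkinsfile might look like the following. This is only a sketch: the stage names mirror the five-job pipeline described earlier, and the shell commands (deploy.sh, run-integration-tests.sh) are illustrative assumptions, not taken from an actual project:

```groovy
// Hypothetical Jenkinsfile using the Groovy-based Workflow DSL.
// Stage names follow the pipeline described above; the deploy and
// test scripts are placeholders for whatever the project uses.
node {
    stage 'Build'
    checkout scm                        // fetch the branch this Jenkinsfile lives in
    sh './gradlew assemble'

    stage 'Pre-deployment tests'
    sh './gradlew check'

    stage 'Deploy to staging'
    sh './deploy.sh staging'

    stage 'Post-deployment tests'
    sh './run-integration-tests.sh staging'

    stage 'Deploy to production'
    sh './deploy.sh production'
}
```

A single script like this replaces what would otherwise be five separate, centrally maintained Jenkins jobs, and it is versioned together with the code it builds.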
Finally, Docker Workflow has been introduced as well, making Docker the first class citizen in Jenkins.
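With the Docker Workflow plugin, Workflow steps can run inside containers. A minimal sketch (the image name and command are assumptions) might look like this:

```groovy
// Hypothetical snippet using the Docker Workflow plugin.
// The build runs inside a Gradle container, so the agent only
// needs Docker installed, not Gradle or a JDK.
node {
    docker.image('gradle:jdk8').inside {
        checkout scm
        sh 'gradle check'
    }
}
```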
All those improvements brought Jenkins to a completely new level confirming its supremacy among CI/CD platforms.
If even more is needed, there is the CloudBees Jenkins Platform – Enterprise Edition that provides amazing features, especially when we need to run Jenkins at scale.
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book. Among many other subjects, it explores Jenkins Workflow, Multibranch Workflow, and Jenkinsfile in much more detail.
This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book envelops the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and, even more, tools.
Thanks for sharing this - good stuff! Keep up the great work; we look forward to reading more from you in the future!
Interesting, but one of the first, if not the first, CI/CD systems I set up was at Sun Microsystems in 1993, called SunPIT (Pre-Integration Testing). I also set up SunPATCH for kernel patches at Sun and HPNET for network testing using CI/CD at HP. I wrote these all in shell scripts and cron jobs. SunPIT was used from the early 90s until after 2010. It was also one of the first clouds, as it involved a cluster of hardware to run standards and priority test suites on the latest code several times a day. On any failure, whoever checked in the code had to fix it immediately. So basically, I created the first CI/CD tools and an early, if not the first, cloud. There is not much public record, just some Sun internal documents and resumes about it.