The Evolution Of Jenkins Jobs And How We Got To The YAML-Based jenkins-x.yml Format

When Jenkins appeared, its jobs were called FreeStyle jobs. There was no way to describe them in code, and they were not kept in version control. We created and maintained those jobs through the Jenkins UI by filling in input fields, marking checkboxes, and selecting values from drop-down lists. The results were impossible-to-read XML files stored in the Jenkins home directory. Nevertheless, that approach was so great (compared to what existed at the time) that Jenkins became widely adopted overnight. But that was many years ago, and what was great over a decade ago is not necessarily as good today. As a matter of fact, FreeStyle jobs are the antithesis of the types of jobs we should be writing today. Tools that create code through drag-and-drop methods are extinct. Not having code in version control is a cardinal sin. Not being able to use our favorite IDE or code editor is unacceptable. Hence, the Jenkins community created Jenkins pipelines.

Jenkins pipelines are an attempt to rethink how we define Jenkins jobs. Instead of the click-type-select approach of FreeStyle jobs, pipelines are defined in a Jenkins-specific Groovy-based domain-specific language (DSL), written in a Jenkinsfile and stored in code repositories together with the rest of the code of our applications. That approach brought many improvements over FreeStyle jobs. A single job could define the entire pipeline, the definition could be stored in the code repository, and we could avoid part of the repetition through the usage of Jenkins Shared Libraries. But, for some users, there was a big problem with Jenkins pipelines.
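To make the format concrete, here is a minimal sketch of a scripted pipeline in a Jenkinsfile. The stage names and shell commands are placeholders for illustration, not taken from any particular project:

```groovy
// A minimal scripted pipeline (Groovy DSL) sketch.
// The commands below are placeholders; substitute your own build tool.
node {
    stage('Build') {
        checkout scm            // fetch the code associated with this job
        sh 'mvn -B clean package'
    }
    stage('Test') {
        sh 'mvn -B test'
    }
    stage('Deploy') {
        // Scripted pipelines allow arbitrary Groovy logic like this conditional.
        if (env.BRANCH_NAME == 'master') {
            sh './deploy.sh'
        }
    }
}
```

Notice that the `node` block and the `if` conditional are plain Groovy; that flexibility is precisely what made scripted pipelines powerful, and what made them intimidating for non-coders.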

Using a Groovy-based DSL was too big of a jump for some Jenkins users. Switching from click-type-select FreeStyle jobs to Groovy code was too much. We had a significant number of Jenkins users with years of experience in accomplishing results by clicking and selecting options, but without knowing how to write code.

I’ll leave aside the discussion in which I’d argue that anyone working in the software industry should be able to write code no matter their role. Instead, I’ll admit that having non-coders transition from UI-based to code-only Jenkins jobs posed too much of a challenge. Even if we agreed that everyone in the software industry should know how to write code, there is still the issue of Groovy.

The chances are that Groovy is not your preferred programming language. You might be working with NodeJS, Go, Python, .NET, C, C++, Scala, or one of the myriads of other languages. Even if you are a Java developer, Groovy might not be your favorite. The fact that the syntax is somewhat similar to Java and that both run on the JVM does not change the fact that Groovy and Java are different languages. Jenkins DSL tried to “hide” Groovy, but that did not remove the need to know (at least basic) Groovy to write Jenkins pipelines. That meant that many had to choose whether to learn Groovy or switch to something else. So, even though Jenkins pipelines were a significant improvement over FreeStyle jobs, there was still work to be done to make them useful to anyone, no matter their language preference. Hence, the community came up with the declarative pipeline.

Declarative pipelines are not really declarative. No matter the format, pipelines are by their nature sequential.

The declarative pipeline format is a simplified way to write Jenkinsfile definitions. To distinguish one from the other, we call the older pipeline syntax scripted, and the newer declarative. We got much-needed simplicity. There was no Groovy to learn unless we employed shared libraries. The new pipeline format was simple to learn and easy to write. It served us well, and it was adopted by static Jenkins X. And yet, serverless Jenkins X introduced another change to the format of pipeline definitions. Today, we can think of static Jenkins X with declarative pipelines as a transition towards serverless Jenkins X.
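The same kind of pipeline, written in the declarative format, trades Groovy flexibility for a fixed, easier-to-learn structure. A minimal sketch (again, the commands are placeholders):

```groovy
// A minimal declarative pipeline sketch.
// The structure (pipeline/agent/stages/steps) is fixed by the format.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
        }
    }
}
```

Everything lives inside predefined sections (`pipeline`, `agent`, `stages`, `steps`), so there is no general-purpose Groovy to learn for typical cases.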

With serverless Jenkins X, we moved into pipelines defined in YAML. Why did we do that? One argument could be that YAML is easier to understand and manage. Another could claim that YAML is the gold standard for any type of definition, Kubernetes resources being one example. Most of the other tools, especially newer ones, switched to YAML definitions. While those and many other explanations are valid and certainly played a role in the decision to switch to YAML, I believe that we should look at the change from a different angle.

All Jenkins formats for defining pipelines were based on the assumption that Jenkins itself would execute them. FreeStyle jobs used XML because Jenkins uses XML to store all sorts of information. A long time ago, when Jenkins was created, XML was all the rage. Scripted pipelines use Groovy DSL because pipelines need to interact with Jenkins. Since Jenkins is written in Java, Groovy is a natural choice. It is more dynamic than Java. It compiles at runtime, allowing us to use it as a scripting mechanism. It can access Jenkins libraries written in Java, and it can access Jenkins itself at runtime. Then we got the declarative pipeline, which sits somewhere between Groovy DSL and YAML. It is a wrapper around the scripted pipeline.

What all those formats have in common is that they are all limited by Jenkins’ architecture. And now, with serverless Jenkins X, there is no Jenkins anymore.

Saying that Jenkins is gone is not entirely correct. Jenkins lives in Jenkins X. The foundation that served us well is there. The experience from many years of being the leading CI/CD platform was combined with the need to solve challenges that did not exist before. We had to come up with a platform that is Kubernetes-first, cloud-native, fault-tolerant, highly-available, lightweight, with API-first design, and so on and so forth. The result is Jenkins X or, to be more precise, its serverless flavor. It combines some of the best tools on the market with custom code written specifically for Jenkins X. And the result is what we call serverless or next generation Jenkins X.

The first generation of Jenkins X (static flavor) reduced the “traditional” Jenkins to a bare minimum. That allowed the community to focus on building all the new tools needed to bring Jenkins to the next level. It reduced Jenkins’ surface and added a lot of new code around it. At the same time, static Jenkins X maintains compatibility with the “traditional” Jenkins. Teams can move to Jenkins X without having to rewrite everything they had before while, at the same time, keeping some level of familiarity.

Serverless Jenkins X is the next stage in the evolution. While static flavor reduced the role of the “traditional” Jenkins, serverless eradicated it. The end result is a combination of Prow, Jenkins Pipeline Operator, Tekton, and quite a few other tools and processes. Some of them (e.g., Prow, Tekton) are open source projects bundled into Jenkins X while others (e.g., Jenkins Pipeline Operator) are written from scratch. On top of those, we got jx as the CLI that allows us to control any aspect of Jenkins X.

Given that there is no “traditional” Jenkins in the serverless flavor of Jenkins X, there is no need to stick with the old formats to define pipelines. Those who need to continue using a Jenkinsfile can do so with static Jenkins X. Those who want to get the most out of the new platform will appreciate the benefits of the new YAML-based format defined in jenkins-x.yml. More often than not, organizations will combine both. There are use cases when static Jenkins X, with its support for Jenkinsfile, is a good choice, especially when projects already have pipelines running in the “traditional” Jenkins. On the other hand, new projects can be created directly in serverless Jenkins X and use jenkins-x.yml to define pipelines.
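As a rough sketch of what a jenkins-x.yml definition looks like: the exact schema and available options depend on the Jenkins X version you run, and the image and commands below are placeholders, not a definitive reference.

```yaml
# A sketch of a serverless Jenkins X pipeline in jenkins-x.yml.
# The schema may differ between Jenkins X versions; treat this as illustrative.
buildPack: none
pipelineConfig:
  pipelines:
    release:
      pipeline:
        agent:
          image: go          # placeholder container image for the steps
        stages:
          - name: release
            steps:
              - name: build
                command: make
                args: ["build"]
              - name: test
                command: make
                args: ["test"]
```

Structurally it is the same idea as a declarative Jenkinsfile (agents, stages, steps), but expressed as plain YAML that the platform, rather than a Jenkins master, interprets.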

Unless you just started a new company, you likely have all sorts of projects. Some of them might be better served by static Jenkins X and Jenkinsfile, others by serverless Jenkins X and jenkins-x.yml, while there are likely to be projects that started a long time ago and do not see enough benefit to change. Those can stay in “traditional” Jenkins running outside Kubernetes, or in any other tool they might be using.

To summarize, static Jenkins X is a “transition” between the “traditional” Jenkins and the final solution (the serverless flavor). The former reduced Jenkins to a bare minimum, while the latter consists of entirely new code written specifically to solve the problems we’re facing today, leveraging all the latest and greatest technology has to offer.

So, Jenkins served us well, and it will continue living for a long time since many applications were written a while ago and might not be good candidates to embrace Kubernetes. Static Jenkins X is for all those who want to transition to Kubernetes without losing all their investment (e.g., Jenkinsfile). Serverless Jenkins X is for all those who seek the full power of Kubernetes and want to be genuinely cloud-native.

For now (April 2019), serverless Jenkins X works only with GitHub. Until that is corrected, using any other Git platform might be yet another reason to stick with static Jenkins X. The rest of the text will assume that you do use GitHub or that you can wait for a while longer until the support for other Git platforms is added.

Long story short, serverless Jenkins X uses YAML in jenkins-x.yml to describe pipelines, while the more traditional Jenkins, as well as static Jenkins X, relies on Groovy DSL defined in a Jenkinsfile. FreeStyle jobs have been deprecated for quite a long time, so we’ll ignore their existence. Whether you prefer the Jenkinsfile or the jenkins-x.yml format will depend on quite a few factors, so let’s break down those that matter the most.

If you already use Jenkins, you are likely used to Jenkinsfile and might want to keep it for a while longer.

If you have complex pipelines, you might want to stick with the scripted pipeline in Jenkinsfile. That being said, I do not believe that anyone should have complex pipelines. Those who do usually tried to solve problems in the wrong place. Pipelines (of any kind) are not supposed to contain complex logic. Instead, they should define the orchestration of automated tasks defined somewhere else. For example, instead of having tens or hundreds of lines of pipeline code that define how to deploy our application, we should move that logic into a script and simply invoke it from the pipeline. That logic is similar to what jx promote does. It performs semi-complex logic, but from the pipeline’s point of view it is a simple step with a single command. If we adopt that approach, there is no need for complex pipelines and, therefore, there is no need for Jenkins’ scripted pipeline. Declarative is more than enough when using Jenkinsfile.
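In practice, that approach reduces a pipeline stage to a single invocation. A sketch, assuming a hypothetical `scripts/deploy.sh` that holds the deployment logic (the path and script are placeholders):

```groovy
// The pipeline orchestrates; the script does the work.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // All the semi-complex deployment logic lives in the script,
                // versioned alongside the application code and testable on its own.
                sh './scripts/deploy.sh'
            }
        }
    }
}
```

A side benefit of this split is that the script can be run and debugged locally, without a Jenkins instance in the loop.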

If you do want to leverage all the latest and greatest that Jenkins X (serverless flavor) brings, you should switch to YAML format defined in the jenkins-x.yml file.

Therefore, use Jenkinsfile with static Jenkins X if you already have pipelines and you do not want to rewrite them. Stop using scripted pipelines as an excuse for misplaced complexity. Adopt serverless Jenkins X with YAML-based format for all other cases.

The DevOps 2.6 Toolkit: Jenkins X

The article you just read is an extract from The DevOps 2.6 Toolkit: Jenkins X.

The book is still in progress, and I do not yet have a clearly defined scope. I write about tech I’m working with and that interests me the most. Right now, that’s Jenkins X.

You can get the book from LeanPub. If you do, you’ll get updates whenever a new chapter is finished. At the same time, you can get more actively involved and send me your comments, suggestions for the next topics, bug reports, and so on. I’d love to hear back from you.


