This article is part of the series that compares Kubernetes and Docker Swarm features.
- Kubernetes Pods, ReplicaSets, And Services Compared To Docker Swarm Stacks
- Kubernetes Deployments Compared To Docker Swarm Stacks
- Kubernetes Ingress Compared To Docker Swarm Equivalent
- Kubernetes ConfigMaps Compared To Docker Swarm Configs
- Kubernetes Secrets Compared To Docker Swarm Secrets
- Kubernetes Namespaces Compared To Docker Swarm Equivalent (If There Is Any)
- Kubernetes RBAC Compared To Docker Swarm RBAC
- Kubernetes Resource Management Compared To Docker Swarm Equivalent
If you already used Docker Swarm, the logic behind Kubernetes Deployments should be familiar. Both serve the same purpose and can be used to deploy new applications or update those that are already running inside a cluster. In both cases, we can easily deploy new releases without any downtime (when application architecture permits that).
However, unlike the previous comparison between Kubernetes Pods, ReplicaSets, And Services, on the one hand, and Docker Swarm Stacks on the other, Deployments do provide a few potentially important functional differences. But, before we dive into the functional comparison, we'll take a moment to explore the differences in how we define objects.
An example Kubernetes Deployment and Services definition is as follows.
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: go-demo-2-db
spec:
  selector:
    matchLabels:
      type: db
      service: go-demo-2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        type: db
        service: go-demo-2
        vendor: MongoLabs
    spec:
      containers:
      - name: db
        image: mongo:3.3
        ports:
        - containerPort: 28017

---

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-db
spec:
  ports:
  - port: 27017
  selector:
    type: db
    service: go-demo-2

---

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: go-demo-2-api
spec:
  replicas: 3
  selector:
    matchLabels:
      type: api
      service: go-demo-2
  template:
    metadata:
      labels:
        type: api
        service: go-demo-2
        language: go
    spec:
      containers:
      - name: api
        image: vfarcic/go-demo-2
        env:
        - name: DB
          value: go-demo-2-db
        readinessProbe:
          httpGet:
            path: /demo/hello
            port: 8080
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /demo/hello
            port: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-api
spec:
  type: NodePort
  ports:
  - port: 8080
  selector:
    type: api
    service: go-demo-2
```
Equivalent Docker Swarm stack definition is as follows.
```yaml
version: "3"

services:

  api:
    image: vfarcic/go-demo-2
    environment:
      - DB=db
    ports:
      - 8080
    deploy:
      replicas: 3

  db:
    image: mongo:3.3
```
Both definitions provide, more or less, the same functionality.
It is evident that the Kubernetes Deployment requires a much longer definition with a more complicated syntax. It is worth noting that Swarm's equivalent to `livenessProbe` is not present because it is defined as a `HEALTHCHECK` inside the Dockerfile. Still, even if we remove the probes, the Kubernetes Deployment remains longer and more complicated.
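For reference, a Swarm-style health check is baked into the image itself. A minimal sketch of a HEALTHCHECK instruction that would mirror the livenessProbe above (the curl flags and timings are illustrative assumptions, not taken from the actual vfarcic/go-demo-2 image):

```
# Hypothetical excerpt from the service's Dockerfile; assumes curl is
# available in the image. The endpoint matches the livenessProbe above.
HEALTHCHECK --interval=5s --timeout=2s --retries=3 \
  CMD curl --fail http://localhost:8080/demo/hello || exit 1
```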
When comparing only the differences in the ways to define objects, Docker Swarm is a clear winner. Let’s see what we can conclude from the functional point of view.
Creating the objects is reasonably straightforward. Both `kubectl create` and `docker stack deploy` will deploy new releases without any downtime. New containers or, in the case of Kubernetes, Pods will be created and, in parallel, the old ones will be terminated at the same rate. So far, both solutions are, more or less, the same.
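Assuming the definitions above are saved as go-demo-2.yml (Kubernetes) and go-demo-2-stack.yml (Swarm), with file names being illustrative, the two deployments would look along these lines:

```
# Kubernetes: create all the objects defined in the file
kubectl create -f go-demo-2.yml

# Docker Swarm: deploy (or update) the stack under the name go-demo-2
docker stack deploy -c go-demo-2-stack.yml go-demo-2
```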
One of the differences is what happens in case of a failure. A Kubernetes Deployment will not perform any corrective action when an update fails. It'll stop the update, leaving a combination of new and old containers running in parallel. Docker Swarm, on the other hand, can be configured to roll back automatically. That might seem like another win for Docker Swarm. However, Kubernetes has something Swarm doesn't. We can use the `kubectl rollout status` command to find out whether the update succeeded or failed and, in case of the latter, we can undo the rollout. Even though we need a few commands to accomplish the same result, that might fare better when updates are automated. Knowing whether an update succeeded or failed allows us not only to execute a subsequent rollback action but also to notify someone that there is a problem.
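Swarm's automated rollback is configured through the deploy section of the stack file. A sketch extending the earlier stack definition (the delay and monitoring values are illustrative assumptions):

```yaml
    deploy:
      replicas: 3
      update_config:
        delay: 5s
        failure_action: rollback   # roll back automatically if the update fails
        monitor: 10s               # how long to watch each new task for failure
```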
Both approaches have their pros and cons. Docker Swarm’s automated rollback is better suited in some cases, and Kubernetes update status works better in others. The methods are different, and there is no clear winner, so I’ll proclaim it a tie.
Kubernetes Deployments can record history. We can use the `kubectl rollout history` command to inspect past rollouts. When updates work as expected, history is not very useful. But, when things go wrong, it might indeed provide additional insight. That can be combined with the ability to roll back to a specific revision, not necessarily the previous one. However, most of the time, we roll back to the previous version, so the ability to go further back in time is rarely needed. Even when such a need arises, both products can do it. The difference is that Kubernetes Deployments allow us to go to a specific revision (e.g., we're on revision five, roll back to revision two). With Docker Swarm, we'd have to issue a new update (e.g., update the image to the tag 2.0). Since containers are immutable, the result is the same, so the difference is only in the syntax behind a rollback.
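Those commands might look as follows, using the go-demo-2-api Deployment from the example above (the revision number is, of course, illustrative):

```
# List past rollouts of the Deployment
kubectl rollout history deployment go-demo-2-api

# Roll back to the previous revision
kubectl rollout undo deployment go-demo-2-api

# Or roll back to a specific revision (e.g., revision 2)
kubectl rollout undo deployment go-demo-2-api --to-revision=2
```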
The ability to roll back to a specific version or a tag exists in both products. We can argue which syntax is more straightforward or more useful. The differences are minor, and I'll proclaim that there is no winner for that functionality. It's another tie.
Since almost everything in Kubernetes is based on label selectors, it has a feature that Docker Swarm doesn't. We can update multiple Deployments at the same time. We can, for example, issue an update (`kubectl set image`) that uses filters to find all Mongo databases and upgrade them to a newer release. It is a feature that would require a few lines of bash scripting with Docker Swarm. However, while the ability to update all Deployments that match specific labels might sound like a useful feature, it often isn't. More often than not, such actions can produce undesirable effects. If, for example, we have five back-end applications that use a Mongo database (one for each), we'd probably want to upgrade them in a more controlled fashion. The teams behind those services would probably want to test each of those upgrades and give their blessings. We probably wouldn't wait until all are finished, but upgrade a single database when the team in charge of it feels confident. Nevertheless, there are cases when such a feature is useful, so I must give this one to Kubernetes. It's a minor win, but it still counts.
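As a sketch, upgrading every Mongo database at once might look like this, assuming the Deployments themselves carry the vendor=MongoLabs label (in the example above only the Pod template does) and that mongo:3.4 is the target release:

```
# Update the db container in all Deployments matching the label selector
kubectl set image deployments -l vendor=MongoLabs db=mongo:3.4
```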
There are a few other things that are easier to accomplish with Kubernetes. For example, due to the way Kubernetes Services work, creating a blue-green deployment process, instead of using rolling updates, is much easier. However, such a process falls into advanced usage so I’ll leave it out of this comparison. It’ll (probably) come later.
It’s difficult to say which solution provides better results. Docker Swarm continues to shine from the user-friendliness perspective. It is much simpler and easier to write a Docker Swarm stack file than a Kubernetes Deployment definition. On the other hand, Deployments offer a few additional functional features that Swarm does not have. However, those features are, for most use cases, of minor importance. Those that indeed matter are, more or less, the same.
Don't make a decision based only on the differences between Kubernetes Deployments and Docker Swarm stacks. Definition syntax is where Swarm has a clear win, while on the functional front Kubernetes has a tiny edge over Swarm. If you were to decide based only on deployments, Swarm might be a slightly better candidate. Or not. It all depends on what matters more in your case. Do you care about YAML syntax? Are those additional Kubernetes Deployment features something you will ever use?
In any case, Kubernetes has much more to offer, and any conclusion based on such a limited comparison scope is bound to be incomplete. We only scratched the surface. Stay tuned for more.
The DevOps 2.3 Toolkit: Kubernetes
The article you just read is an extract from The DevOps 2.3 Toolkit: Kubernetes.
The goal of the book is not to convince you to adopt Kubernetes but to provide a detailed overview of its features. I want you to become confident in your Kubernetes knowledge and only then choose whether to embrace it. That is, unless you already made up your mind and stumbled upon this book in search of Kubernetes guidance.
The book is about running containers at scale and not panicking when problems arise. It is about the present and the future of software deployment and monitoring. It’s about embracing the challenges and staying ahead of the curve.
Give it a try and let me know what you think.