Tag Archives: ElasticSearch

What Should We Expect From Centralized Logging?

There are quite a few candidates for your centralized logging needs. Which one should you choose? Will it be Papertrail, the Elasticsearch-Fluentd-Kibana stack (EFK), AWS CloudWatch, GCP Stackdriver, Azure Log Analytics, or something else?

When possible and practical, I prefer a centralized logging solution provided as a service, instead of running it inside my clusters. Many things are easier when others make sure that everything works. If we use Helm to install EFK, the setup might seem easy. However, maintenance is far from trivial. Elasticsearch requires a lot of resources. For smaller clusters, the cost of the compute needed to run Elasticsearch alone is likely higher than the price of Papertrail or a similar solution. If I can get a service managed by others for the same price as running the alternative inside my own cluster, the service wins most of the time. But there are a few exceptions.
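
To make the comparison concrete, a minimal sketch of the in-cluster route might look like the commands below. The chart names and the helm.elastic.co repository are assumptions for illustration, not steps from the article; Fluentd and Kibana would need charts of their own.

```bash
# Add Elastic's Helm repository and install the Elasticsearch chart
# (Helm 3 syntax; the namespace and release names are arbitrary)
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch \
  --namespace logging --create-namespace
```

Every release installed this way becomes something we have to upgrade, resize, and troubleshoot ourselves, which is exactly the maintenance burden described above.
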
Continue reading


Collecting Metrics and Monitoring Docker Swarm Clusters

In the Forwarding Logs From All Containers Running Anywhere Inside A Docker Swarm Cluster article, we managed to add centralized logging to our cluster. Logs from any container running inside any of the nodes are shipped to a central location. They are stored in ElasticSearch and available through Kibana. However, the fact that we have easy access to all the logs does not mean that we have all the information we would need to debug a problem or prevent it from happening in the first place. We need to complement our logs with the rest of the information about the system. We need much more than what logs alone can provide.
Continue reading

Forwarding Logs From All Containers Running Anywhere Inside A Docker Swarm Cluster

In this article, we’ll discuss a way to forward logs from containers created as Docker Swarm services inside our clusters. We’ll use the ELK stack. Logs will be forwarded from containers to LogStash and, from there, to ElasticSearch. Once in the database, they will be available through Kibana.
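
The article walks through the full setup; as a rough sketch of one possible forwarding mechanism (not necessarily the one used in the article), Docker's gelf log driver can ship a service's output straight to LogStash. The service name, image, and logstash address below are assumptions.

```bash
# Ship a service's stdout/stderr to LogStash via the gelf log driver
# (logstash:12201 must be resolvable and reachable from every node)
docker service create --name web \
  --log-driver gelf \
  --log-opt gelf-address=udp://logstash:12201 \
  nginx
```

On the LogStash side, a matching gelf input (e.g. input { gelf { port => 12201 } }) and an elasticsearch output would complete the pipeline.
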
Continue reading

Centralized Logging and Monitoring

I have so much chaos in my life, it’s become normal. You become used to it. You have to just relax, calm down, take a deep breath and try to see how you can make things work rather than complain about how they’re wrong.

— Tom Welling

Monitoring many services on a single server poses some difficulties. Monitoring many services on many servers requires a whole new way of thinking and a new set of tools. As you start embracing microservices, containers, and clusters, the number of deployed containers will begin increasing rapidly. The same holds true for the servers that form the cluster. We can no longer log into a node and look at logs. There are too many logs to look at and, on top of that, they are distributed among many servers. While yesterday we had two instances of a service deployed on a single server, tomorrow we might have eight instances deployed to six servers. The same holds true for monitoring. Old tools, like Nagios, are not designed to handle constant changes in running servers and services. We already used Consul, which provides a different, arguably new, approach to near real-time monitoring and to reacting when thresholds are reached. However, that is not enough. Real-time information is valuable for detecting that something is wrong, but it does not tell us why the failure happened. We can know that a service is not responding, but we cannot know why.
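
For context on the Consul approach mentioned above, its threshold monitoring is driven by simple check definitions registered with the local agent. The script path, check name, and interval below are hypothetical, meant only to show the shape of such a check.

```bash
# Register a script check with the local Consul agent (HTTP API).
# Consul runs the script every 10s; exit code 0 means passing,
# 1 means warning, anything else means critical.
curl -X PUT http://localhost:8500/v1/agent/check/register \
  -d '{"Name": "mem-utilization", "Script": "/usr/local/bin/check_mem.sh", "Interval": "10s"}'
```
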
Continue reading

The DevOps 2.0 Toolkit

Today is an exciting day for me. I just decided that the book I spent the last eight months writing is ready for the general public.

What made me write the book? Certainly not the promise of wealth since, as any author of technical books will confirm, no money can compensate for the number of hours involved in writing a technical book. The reasons behind this endeavor are of a different nature. I realized that this blog is a great way for me to explore different subjects and share my experience with the community. However, due to the format, blog posts do not give me enough space to explore in more detail the subjects I’m passionate about, so, around eight months ago, I decided to start working on The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book. It covers subjects similar to those I write about in this blog, but in much more detail. More importantly, the book allowed me to organize my experience into a much more coherent story.

The book is about different techniques that help us architect software in a better and more efficient way: microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book covers the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and even more tools.

At this moment, around 70% of the book is finished, and you’ll receive regular updates if you decide to purchase it. The truth is that my motivation for writing the book is the same as for this blog: I like sharing my experience, and this book is one more way to accomplish that. You can set your own price, and if you feel that the minimum amount is still too high, please send me a private message and I’ll get back to you with a free copy.

Please give The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices a try and let me know what you think. Any feedback is welcome and appreciated.

Centralized System and Docker Logging with ELK Stack

With Docker, there was not supposed to be a need to store logs in files. We should output information to stdout/stderr, and the rest will be taken care of by Docker itself. When we need to inspect logs, all we are supposed to do is run docker logs [CONTAINER_NAME].
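
As a minimal illustration of that workflow (the container name and image are arbitrary):

```bash
# Run a container that writes to stdout, then inspect its output
docker run -d --name web nginx
docker logs web        # print everything logged so far
docker logs -f web     # follow the stream, like tail -f
```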

With Docker and the ever more popular use of microservices, the number of deployed containers is increasing rapidly. Monitoring logs for each container separately quickly becomes a nightmare. Monitoring a few or even ten containers individually is not hard. When that number starts moving towards tens or hundreds, individual logging is impractical at best. If we add distributed services, the situation gets even worse. Not only do we have many containers, but they are distributed across many servers.

The solution is to use some kind of centralized logging. Our favourite combination is the ELK stack (ElasticSearch, LogStash, and Kibana). However, centralized logging with Docker at a large scale was not a trivial thing to do until version 1.6 was released. We had a couple of solutions, but none of them seemed good enough.
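
Version 1.6 matters here because it introduced pluggable logging drivers, letting a container's output go somewhere other than the default local json-file. A minimal sketch (the image is arbitrary):

```bash
# Since Docker 1.6, the log driver can be chosen per container,
# e.g. sending output to syslog instead of the default json-file
docker run -d --log-driver syslog nginx
```
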
Continue reading