Logging in Kubernetes using Elasticsearch: The easy way

Monday, Nov 26, 2018| Tags: k8s, kubernetes, logging, elasticsearch, fluentd, fluent-bit

In my conversations with various development teams, I regularly come across the same question: how do you do logging on Kubernetes?

While existing solutions like the EFK (Elasticsearch, Fluentd, Kibana) stack work well, it takes a good amount of effort to set them up and make them work together. I was wondering if I could offer some easy steps to people who just want to get started with logging, without all the tedium.

So, let’s look at how to set up “logging in Kubernetes the easy way”.

Prerequisites

  1. A running Kubernetes cluster. You can set one up using kops, EKS (on AWS), or GKE (on GCP).
  2. Helm installed. If you don’t already have it, follow this tutorial to get up and running.

fluent-bit

Before we start setting up our infrastructure, a quick note on why I chose fluent-bit instead of using fluentd directly: fluent-bit enriches the metadata of logs by querying the API server. It adds the following to each log record:

  1. Pod name
  2. Pod ID
  3. Container name
  4. Container ID
  5. Labels
  6. Annotations

That is pretty neat to get out of the box.
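
For illustration, here is roughly what an enriched record looks like once fluent-bit’s Kubernetes filter has run. The field values below are made up, and the exact shape can vary with your fluent-bit version and parser configuration:

    {
      "log": "GET /healthz 200\n",
      "stream": "stdout",
      "kubernetes": {
        "pod_name": "my-app-5dfc8b8b5d-x2x7v",
        "pod_id": "0a1b2c3d-...",
        "namespace_name": "default",
        "container_name": "my-app",
        "docker_id": "4e5f6a...",
        "labels": { "app": "my-app" },
        "annotations": { "prometheus.io/scrape": "true" }
      }
    }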

Installation

Below are the steps for a basic installation that will get you up and running in 5 minutes. For production use you might want to modify various parameters of the Helm charts (e.g. the size of the Elasticsearch persistent volume claim, which defaults to 30 GB).
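
If you do want to override a default, you can list a chart’s tunables and pass overrides with --set to the install commands below. For example, in the stable/elasticsearch chart the persistent volume size lives under data.persistence.size at the time of writing (check the output for your chart version):

$ helm inspect values stable/elasticsearch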

Create a namespace logs and install Elasticsearch:

$ helm install stable/elasticsearch --name=elasticsearch --namespace=logs
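
Elasticsearch takes a couple of minutes to come up. You can watch the client, master, and data pods until they are all running (pod names come from the chart and may differ across versions):

$ kubectl -n logs get pods -w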

Install fluent-bit, passing it the Elasticsearch service endpoint during installation. This chart installs a DaemonSet that starts a fluent-bit pod on each node. This is the workhorse of log collection: it picks up the logs from the host node and pushes them to Elasticsearch.

$ helm install stable/fluent-bit --name=fluent-bit --namespace=logs --set backend.type=es --set backend.es.host=elasticsearch-client 
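Once the chart is installed, you can confirm that a collector is running on every node. Assuming the DaemonSet is named after the Helm release, as with the command above:

$ kubectl -n logs get daemonset fluent-bit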

At this point you will have logs collecting in Elasticsearch. Now you will want to search them. This is where you install Kibana.

$ helm install stable/kibana --name=kibana --namespace=logs --set env.ELASTICSEARCH_HOSTS=http://elasticsearch-client:9200 
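You can check that the Kibana pod is up before moving on (the app=kibana label follows the chart’s conventions and may differ in other chart versions):

$ kubectl -n logs get pods -l app=kibana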

Now you have everything set up.

Accessing logs

To reach the Kibana dashboard, make it available on your machine locally by forwarding the port:

$ kubectl -n logs port-forward svc/kibana 9080:443

You should now be able to access the Kibana dashboard on your machine by pointing your browser at http://localhost:9080
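
One last step before you can search: in Kibana, create an index pattern matching the index fluent-bit writes to. The stable/fluent-bit chart’s default index name is kubernetes_cluster at the time of writing (run helm inspect values stable/fluent-bit to check yours), so a pattern like kubernetes_cluster* should pick it up.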

You are all set up.


