Logs are an essential aspect of server and application management: they help identify issues, troubleshoot problems, and monitor performance. However, with multiple servers and applications such as VPN, DNS, HAProxy, or Home Assistant running, monitoring logs can become challenging, especially when tracing requests across multiple Docker microservices.

This is where Docker log monitoring with Kibana and Elasticsearch comes into play. Kibana and Elasticsearch are popular open-source tools for log monitoring, analytics, and visualization.

In the following sections, we focus on the steps to implement log monitoring with Kibana and Elasticsearch.

Start Elasticsearch

To get started, run the following Docker command to start the Elasticsearch container.

docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -e "discovery.type=single-node" -d docker.elastic.co/elasticsearch/elasticsearch:8.6.2

There is an important environment parameter used in the Docker run command.

  • -e "discovery.type=single-node": configures Elasticsearch to form a single-node cluster instead of trying to discover and join other nodes.
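To confirm the node is reachable, you can query it from the host. Note that since version 8, Elasticsearch enables TLS and authentication by default; the password for the elastic user is generated automatically and printed in the container log on first start. The snippet below is a sketch using a placeholder for that password:

```shell
# Query the cluster health endpoint. -k skips verification of the
# self-signed certificate Elasticsearch generates on first start.
# <ELASTIC_PASSWORD> is a placeholder for the auto-generated password
# printed by `docker logs elasticsearch`.
curl -k -u elastic:<ELASTIC_PASSWORD> "https://localhost:9200/_cluster/health?pretty"
```

A healthy single-node setup typically reports a "green" or "yellow" status.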

Start and configure Kibana

Next, start the Kibana Docker container with the following command.

docker run --name kibana -d -p 5601:5601 docker.elastic.co/kibana/kibana:8.6.2

To obtain the URL of your Kibana instance, including the setup code parameter, inspect the container log with docker logs kibana.
You should find log entries like these:

[2023-04-13T18:51:39.134+00:00][INFO ][http.server.Preboot] http server running at
[2023-04-13T18:51:39.174+00:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
[2023-04-13T18:51:39.176+00:00][INFO ][preboot] "interactiveSetup" plugin is holding setup: Validating Elasticsearch connection configuration…
[2023-04-13T18:51:39.212+00:00][INFO ][root] Holding setup until preboot stage is completed.

Kibana has not been configured.

Go to to get started.

Visiting the logged URL allows you to configure your Kibana container and connect it to the running Elasticsearch container.
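If the log is long, the setup URL can also be extracted directly. This is a sketch that assumes the URL appears in Kibana's usual "Go to http://… to get started." log line:

```shell
# Extract the most recent URL (the one-time setup link including its
# ?code= parameter) from the Kibana container log.
docker logs kibana 2>&1 | grep -o 'http://[^ ]*' | tail -n 1
```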

Kibana Configuration Landing Page

Generate an Elasticsearch enrollment token by executing the provided Elasticsearch script inside the running Docker container.

docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana

After pasting the token into Kibana, the configuration should finish after a few seconds, and you will be redirected to the login screen.

To generate a new password for the default user ‘elastic’, use:
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

Now your centralized Elasticsearch and Kibana instances are running and connected.

Check the Kibana configuration file (/usr/share/kibana/config/kibana.yml) inside the Kibana Docker container to make sure the Elasticsearch container is addressed correctly (elasticsearch.hosts).
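A quick way to inspect that setting without opening an interactive shell in the container (a sketch; the file path is the one stated above):

```shell
# Print the configured Elasticsearch endpoints from inside the Kibana container.
docker exec kibana grep "elasticsearch.hosts" /usr/share/kibana/config/kibana.yml
```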

Next, you need to fill Elasticsearch with the logs you wish to monitor.
One option for shipping logs is Filebeat.


Ship Logs with Filebeat

Filebeat is an open-source data shipper that collects, processes, and ships logs or other data from various sources to Elasticsearch. It can monitor system logs, application logs, and network logs, and is especially useful for logs from Docker containers.

Before starting Filebeat as a Docker container, create a configuration file (filebeat.yml) that will be mounted as a volume into the Filebeat container.

filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true

output.elasticsearch:
  hosts: ["https://HOST:9200"]
  username: elastic
  password: XXXXXXX
  ssl.verification_mode: none
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

You can also ship plain log files from the filesystem by adding an input with the paths to the files that should be shipped.

- type: log
  paths:
    - '/syslog'

Once started, the Filebeat container reads all logs generated by your Docker containers and ships them to Elasticsearch.

docker run -d --name filebeat -v "/var/run/docker.sock:/var/run/docker.sock:ro" -v "/var/lib/docker:/var/lib/docker:ro" -v /PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:8.6.2

Here’s a breakdown of the important options used in the above command:

  • -v "/var/run/docker.sock:/var/run/docker.sock:ro" mounts the Docker socket file into the container as read-only. This allows Filebeat to access Docker logs.
  • -v "/var/lib/docker:/var/lib/docker:ro" mounts the Docker data directory into the container as read-only. This allows Filebeat to access container metadata.
  • -v /PATH/filebeat.yml:/usr/share/filebeat/filebeat.yml mounts the configuration file created before from the host machine into the container.
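After Filebeat has been running for a moment, you can check whether documents are actually arriving by listing the Filebeat indices in Elasticsearch. A sketch, using the same HOST and password placeholders as the configuration above:

```shell
# List Filebeat-created indices with their document counts.
# <ELASTIC_PASSWORD> is a placeholder; -k skips certificate verification.
curl -k -u elastic:<ELASTIC_PASSWORD> "https://HOST:9200/_cat/indices/filebeat-*?v"
```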

Visualize Logs in Kibana

In order to visualize your logs in Kibana, create a data view (formerly called an index pattern) that matches the index name used by Filebeat when sending logs to Elasticsearch.

Once the data view is created, you can use Kibana’s powerful search and filter capabilities to find the logs you need and create visualizations or dashboards to display the data.
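Instead of clicking through the UI, the data view can also be created through Kibana's HTTP API. A sketch, assuming Kibana 8.x reachable on localhost and the password placeholder used earlier:

```shell
# Create a data view matching all Filebeat indices via the Kibana API.
# The kbn-xsrf header is required by Kibana for API write requests.
curl -X POST "http://localhost:5601/api/data_views/data_view" \
  -u elastic:<ELASTIC_PASSWORD> \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "filebeat-*", "timeFieldName": "@timestamp"}}'
```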


Finally, centralized Docker log monitoring with Kibana, Elasticsearch, and Filebeat is a crucial part of server and application management. It helps identify issues, troubleshoot problems, and monitor performance. By centralizing and visualizing logs, you can quickly spot and resolve problems, ensuring the health and performance of your applications and servers.

Categories: DevOps, Infrastructure