elk-hole

Pi-hole data visualization using Elasticsearch, Logstash and Kibana

elk-hole provides the files and configuration needed to easily visualize Pi-hole/dnsmasq statistics with the popular Elastic Stack (ELK).

Show, search, filter and customize Pi-hole statistics ... the ELK way

Requirements:

Working Installation of:

  1. logstash (currently tested up to version 7.9)
  2. elasticsearch (currently tested up to version 7.9)
  3. kibana (currently tested up to version 7.9)
  4. filebeat on the pi-hole host (tested with 1.3.1, 7.1.1 and 7.9)

For official installation guides of the ELK stack, refer to [Elastic](https://www.elastic.co/) for details.

For a quick setup, check out easyELK

elk-hole ships the relevant files and configuration for sending the pi-hole logs via filebeat directly to logstash/elasticsearch. The logs are then visualized in kibana with a custom dashboard.

The result will look like this:

(screenshot: elk-hole Kibana dashboard)

Alternative:

(screenshot: alternative dashboard layout)

HOW TO USE

LOGSTASH HOST

  1. Download the files from the elk-hole repo
  2. From the downloaded files, copy 20-dns-syslog.conf to /etc/logstash/conf.d/ and the patterns directory to /etc/logstash/ on your logstash system.

Your files should look like this:

/etc/logstash/conf.d/20-dns-syslog.conf

/etc/logstash/patterns/dns

If you have other files in this folder, make sure to edit their input/output/filter sections so they do not also match our filebeat DNS logs, since files that sort earlier are processed first. For testing purposes you can name your conf files like so:

/conf.d/20-dns-syslog.conf
/conf.d/30-other1.conf
/conf.d/40-other2.conf

This makes sure that /conf.d/20-dns-syslog.conf is processed at the beginning.
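
If you want a harder guarantee, you can also wrap the filters of your other pipelines in a conditional so they skip the Pi-hole events entirely. This is only a sketch: the "pihole" tag used below is an illustrative assumption (you would have to add it yourself via the tags option in filebeat.yml); it is not part of the shipped configuration.

# 30-other1.conf (sketch) - keep an unrelated pipeline away from the Pi-hole events
filter {
  if "pihole" not in [tags] {
    # ... filters for your other log sources ...
  }
}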

3.a Using vim or nano, open 20-dns-syslog.conf. You may want to scroll down to the date section and change the timezone to match your local time.
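
As an illustration of where this setting lives (the field name and match patterns below are assumptions, not the exact contents of the shipped file), a logstash date filter with an explicit timezone looks like this:

filter {
  date {
    # "time" and the match patterns are placeholders - keep whatever the shipped file uses
    match => [ "time", "MMM  d HH:mm:ss", "MMM d HH:mm:ss" ]
    # set this to your local timezone, e.g. "Europe/Berlin" or "America/New_York"
    timezone => "Europe/Berlin"
  }
}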

3.b Head to the output section and set ELASTICSEARCHHOST:PORT to match your environment. If elasticsearch is running on the same system as logstash, 127.0.0.1:9200 should work.
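
For orientation, a minimal elasticsearch output looks like the sketch below; the index name is an assumption derived from the logstash-syslog-dns* index pattern used later in Kibana, so keep whatever the shipped file defines:

output {
  elasticsearch {
    # replace with ELASTICSEARCHHOST:PORT for your environment
    hosts => ["127.0.0.1:9200"]
    # assumed index name; the shipped conf may differ
    index => "logstash-syslog-dns-%{+YYYY.MM.dd}"
  }
}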

  4. Restart logstash: systemctl restart logstash.service

PI-HOLE HOST

  1. From the downloaded files, copy filebeat.yml to /etc/filebeat/ and copy 99-pihole-log-facility.conf to /etc/dnsmasq.d/
  2. Using vim or nano, open filebeat.yml and edit the hosts: line to point at the logstash system, i.e. LOGSTASH IP:5141 (a sketch of the relevant snippet follows the verification output below)
  3. Restart filebeat: systemctl restart filebeat.service
  4. Important: Restart pi-hole's DNS service with pihole restartdns and ensure filebeat is sending logs to logstash before proceeding further.
  5. You can verify that filebeat is running properly with the following two steps:
  6. service filebeat status - the output should show a couple of key messages: Active: active (running) & Connection established
  7. sudo filebeat test output - it should show:
Logstash: <Logstash IP>:5141...
  Connection..
    Parse hosts... OK
    Dns lookup...  OK
    Addresses: <Logstash IP>
    Dial up... OK
  TLS... WARN secure connection disabled
  Talk to server... OK
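
For reference, the logstash output block in filebeat.yml should end up looking roughly like this (only the hosts line needs your value; keep everything else as shipped):

output.logstash:
  # IP or hostname of the system running logstash, port 5141 as expected by 20-dns-syslog.conf
  hosts: ["<LOGSTASH IP>:5141"]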

The following steps on the Kibana host will not work correctly if sending data to logstash is not successful!

KIBANA HOST (CAN BE THE SAME AS LOGSTASH AND ELASTICSEARCH)

  1. Browse to the Kibana management interface using a web browser: http://<Kibana IP>:5601
  2. Go to Management --> Kibana --> Index Patterns and click Create index pattern
  3. Type logstash-syslog-dns* - it should find one index (you can also double-check that the index exists with the curl sketch after this list)
  4. Click Next step and select @timestamp as the time field
  5. Click Create index pattern
  6. Once the index pattern is created, verify that 79 fields are listed
  7. Click the curved arrows on the top right to refresh the index fields. This is important because it does not happen automatically.
  8. Browse to Management --> Kibana --> Saved Objects
  9. Select Import (you will repeat this step for each file)
  10. From the downloaded files, locate the json folder (or ndjson if you are using a recent version of ELK) and import the following files, depending on your software version (1.3.1 or 7.x):

elk-hole - vis.json

elk-hole - vis_enhanced.json

elk-hole - vis_enhanced_fix.json

elk-hole - dash.json

elk-hole - dash_enhanced.json

Note: When you import these files, you may see an "Index Pattern Conflicts" message. This is OK. Below that message you may see one or two rows of data; on each row, click the drop-down menu and select "logstash-syslog-dns*".
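
If you want to confirm from the command line that the index from step 3 actually exists, you can query elasticsearch directly; the sketch below assumes it listens on localhost:9200, so adjust host and port to your setup:

curl "http://localhost:9200/_cat/indices/logstash-syslog-dns*?v"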

  11. Browse to Dev Tools (the wrench icon in the left navigation)
  12. When Dev Tools opens with the Console tab underlined, you will see two columns
  13. Delete any existing content in the left column
  14. From the downloaded files, locate logstash-syslog-dns-index.template_ELK7.x.json
  15. Open that file in a text editor on your system
  16. Copy the entire contents of the file
  17. Paste the contents of logstash-syslog-dns-index.template_ELK7.x.json into Kibana's Dev Tools console (an equivalent curl command is sketched after this list)
  18. Click the green triangle in the upper right of the pasted content (first line). The output should be:
{
  "acknowledged" : true 
}
  19. As a precaution, restart the whole ELK stack:
systemctl restart logstash.service 
systemctl restart elasticsearch.service
systemctl restart kibana.service
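
If you prefer the command line over Kibana's Dev Tools, an index template can also be loaded with curl. This is only a sketch: the template name is an illustrative assumption, and it presumes the JSON file contains just the template body (strip any leading request line the file may carry):

curl -X PUT "http://localhost:9200/_template/logstash-syslog-dns" \
     -H "Content-Type: application/json" \
     -d @logstash-syslog-dns-index.template_ELK7.x.json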

Important: After setting everything up, please also re-index and refresh Kibana's index field list.

You should then be able to see your new dashboard and visualizations.