Installing and configuring ELK: Elasticsearch + Logstash + Kibana (with Filebeat)

Installing and setting up Kibana to analyze some log files is not a trivial task. Kibana is just one part in a stack of tools typically used together:

  • Elasticsearch: full text search engine. Installation instructions here
  • Logstash: data collection and filtering. Installation instructions here
  • Kibana: analytics and visualization platform. Installation instructions here

Together these are referred to as the ELK stack.
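
For reference, the install pages linked above all boil down to roughly the same thing on Debian/Ubuntu: add Elastic's apt repository and install each package. A sketch, assuming the 6.x repo (substitute whatever the current version is):

sudo apt-get install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch logstash kibana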

Working through the install steps and many other how-to guides, you'll find another piece of the puzzle commonly referenced: Filebeat, which watches local files (e.g. logs) and ships them to either Elasticsearch or Logstash. Installation instructions here.

I have an nginx access.log file that I'd like to dig into for common patterns and trends. The flow of data through the tools in the stack looks like:

  • Filebeat -> Logstash -> Elasticsearch -> Kibana

First I configured Logstash to define a simple pipeline for ingesting my log files – at this point I haven’t configured any filtering – this is in pipeline1.conf:

input {
    beats {
        port => "5043"
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
    elasticsearch {
        hosts => [ "192.168.1.94:9200" ]
    }
}

This accepts incoming log data from Filebeat on port 5043 and outputs the results to Elasticsearch.
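
Before starting it up, Logstash can sanity-check a pipeline file without actually running it, which is a worthwhile validation step (same paths as the start command below):

sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.test_and_exit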

Start up Logstash with:

sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.reload.automatic

Now to configure Filebeat to push my nginx access.log into Logstash:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/kev/vps1-logs/access.log

output.logstash:
  hosts: ["192.168.1.94:5043"]
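
Before starting Filebeat, it's worth a quick check that the Logstash beats port is reachable from the Filebeat host. A simple probe with netcat (substitute your own host and port):

nc -vz 192.168.1.94 5043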

Start Filebeat to upload the nginx access.log file (this is a local copy of the access.log from my nginx server, but I understand you can use Filebeat to ship updates from your server to your ELK server on the fly):

/usr/share/filebeat/bin$ sudo ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
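
One gotcha to be aware of: Filebeat tracks how far it has read each file in a registry file (on a deb install this is typically /var/lib/filebeat/registry, though the location varies by version), so if you want to re-ship a file it has already sent, stop Filebeat and remove the registry first:

sudo rm /var/lib/filebeat/registry   # path may differ by version; files are re-read from scratch on next start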

After the data has transferred, I can hit the Kibana site at http://192.168.1.94:5601/ and start querying my log data.
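
To confirm the data actually landed, you can also ask Elasticsearch directly which indices it has (by default Logstash writes to date-stamped logstash-* indices):

curl 'http://192.168.1.94:9200/_cat/indices?v'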

It looks like I might have some work to do to better tag my log data, but with the data imported, it's time to start checking out the query syntax.

GitLab service control commands

After installing GitLab from the Omnibus package, use the gitlab-ctl command to query status and start/stop the GitLab services (see here):

$ sudo gitlab-ctl status

If GitLab's main service has been disabled, all the sub-services will report 'runsv not running':

fail: gitaly: runsv not running

You can re-enable the main service to run at startup with (see here):

sudo systemctl enable gitlab-runsvdir.service

To disable startup at boot:

sudo systemctl disable gitlab-runsvdir.service

If runsvdir is not enabled to start at boot, you can start it manually with:

sudo systemctl start gitlab-runsvdir.service

To start/stop gitlab:

$ sudo gitlab-ctl start

$ sudo gitlab-ctl stop
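
gitlab-ctl can also act on an individual sub-service, and tail its logs, which is handy when only one component is misbehaving (use the service names reported by gitlab-ctl status):

sudo gitlab-ctl restart nginx
sudo gitlab-ctl tail gitaly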

AWS IoT and Node.js on the Raspberry Pi

There are many approaches for installing node.js on the Raspberry Pi (Google and you'll find lots of guides), presumably because for a while there didn't seem to be any official binaries in the apt repos, so people were building and sharing their own.

I installed a version from somewhere (can't actually remember where, as it was a while back) and it doesn't support the ES6 class syntax used by some of the dependent libraries in the AWS IoT SDK:

$ node index.js
/home/pi/aws-iot-nodejs-pi-lights/node_modules/aws-iot-device-sdk/node_modules/mqtt/node_modules/websocket-stream/server.js:6
class Server extends WebSocketServer{
^^^^^
SyntaxError: Unexpected reserved word
    at Module._compile (module.js:439:25)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/home/pi/aws-iot-nodejs-pi-lights/node_modules/aws-iot-device-sdk/node_modules/mqtt/node_modules/websocket-stream/index.js:2:14)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)

The version I currently have installed is:

pi@raspberrypi:~ $ node -v
v0.10.29

Since I'm not sure where this version came from originally (and apt-get upgrade is not finding any updates), I uninstalled it:

sudo apt-get remove nodejs
sudo apt-get remove npm

Then I followed the steps in the AWS IoT SDK guide here to install the version provided by Adafruit's repo (official node.js binaries for ARM are also available from nodejs.org here).

The version provided by Adafruit is v0.12.6, but unfortunately this still gives the same error with the ES6 class keyword:

$ node -v
v0.12.6

Next, let's try the ARM version from nodejs.org. There are step-by-step instructions here showing how to download the tar, extract it, and copy it to /usr/local/.
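
In rough outline the steps look like this (a sketch, assuming Node v8.9.1 on a Pi 2/3, which is armv7l; check nodejs.org for the tarball matching your Pi's ARM revision):

wget https://nodejs.org/dist/v8.9.1/node-v8.9.1-linux-armv7l.tar.xz
tar -xJf node-v8.9.1-linux-armv7l.tar.xz
# copy node, npm and the bundled libs into /usr/local
cd node-v8.9.1-linux-armv7l
sudo cp -R * /usr/local/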

Now we have:

$ node -v
v8.9.1

And now, trying to run my AWS IoT node.js based app: success!

Installing SSL certificates for Nginx on Ubuntu

Purchasing an SSL certificate requires creating a Certificate Signing Request (CSR) which you can do on your host using:

openssl req -new -newkey rsa:2048 -nodes -keyout yourdomain.key -out yourdomain.csr
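
(The -nodes flag leaves the private key unencrypted, so nginx can start without prompting for a passphrase.) You can review the details you're about to submit with:

openssl req -in yourdomain.csr -noout -text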

When you purchase your certificate from your vendor, you'll provide the text content from your CSR file. The vendor sends back the certificate itself (normally a .crt file); the matching private key is the yourdomain.key generated alongside the CSR above. Transfer both to your server and place them somewhere like /etc/ssl-certs/.

In your /etc/nginx/nginx.conf (or /etc/nginx/sites-enabled/default), add to the server {  } block:

server {
  listen 443 ssl;   # 'ssl on;' is not needed when the listen directive includes ssl
  ssl_certificate     /etc/ssl-certs/yourdomain.crt;
  ssl_certificate_key /etc/ssl-certs/yourdomain.key;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers         HIGH:!aNULL:!MD5;

  # rest of server config
}
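
Before restarting, it's worth checking the config parses cleanly:

sudo nginx -t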

Restart nginx with:

sudo service nginx restart

This is documented in the nginx docs here.