From the FAQ, here:
sudo sysctl -w vm.nr_hugepages=128
then add the following lines to /etc/security/limits.conf:
* soft memlock 262144
* hard memlock 262144
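To make the hugepages setting stick across reboots and to confirm everything took effect, something like the following should work (a minimal sketch assuming a standard Linux sysctl setup; the 99-hugepages.conf file name is my choice, not from the FAQ):

# Persist the hugepages setting across reboots
echo 'vm.nr_hugepages=128' | sudo tee /etc/sysctl.d/99-hugepages.conf
sudo sysctl --system                  # reload all sysctl configuration files

# Verify the pages were actually reserved
grep HugePages_Total /proc/meminfo

# After logging out and back in, confirm the new memlock limit (in KB)
ulimit -l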
After logging on to the ESXi Host Client web UI, an Unhandled Exception dialog had started to bug me, but I think it was fixed in a patch release at some point after I first installed ESXi 6.5.
This how-to doc describes the steps for updating the Host Client using the Host Client web UI itself, via Host / Manage / Packages / Install Update, pasting in the URL of the current/latest .vib file, which you can get from here.
The Unhandled Exception issue described here is now fixed – the specific update I installed was:
http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-7119706.vib
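If you'd rather not go through the web UI, the same .vib should also be installable from an SSH session on the host using esxcli (a sketch, assuming SSH is enabled on the host; check the release notes in case the update needs maintenance mode):

esxcli software vib update -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-7119706.vib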
I have a new MacBook Pro, so I've been setting up my most commonly used Ham Radio apps from scratch. Having tried various apps for digital modes and logging on the Mac, this time round I'm just installing the ones that I use most or found I preferred.
Here's my rundown of the apps:
Installing and setting up Kibana to analyze some log files is not a trivial task. Kibana is just one part in a stack of tools typically used together:

Elasticsearch
Logstash
Kibana

Together these are referred to as the ELK stack.
Working through the install steps and many other how-to guides, another piece of the puzzle commonly referenced is Filebeat, which watches local files (e.g. logs) and ships them to either Elasticsearch or Logstash. Installation instructions here.
I have an nginx access.log file that I'm interested in taking a look at for common patterns and trends. The flow and interaction of each of these tools in the stack looks like:

access.log -> Filebeat -> Logstash -> Elasticsearch -> Kibana
First I configured Logstash with a simple pipeline for ingesting my log files; at this point I haven't configured any filtering. This is pipeline1.conf:
input {
    beats {
        port => "5043"
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
    elasticsearch {
        hosts => [ "192.168.1.94:9200" ]
    }
}
This takes the incoming log data from Filebeat and outputs the result to Elasticsearch.
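Before starting it up for real, the pipeline syntax can be checked first; Logstash's --config.test_and_exit flag parses the config and reports any errors without running it:

sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.test_and_exit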
Start up Logstash with:
sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.reload.automatic
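To confirm Logstash came up cleanly, its monitoring API (port 9600, bound to localhost by default, so run this on the Logstash box) returns basic node info, and the Beats input should be listening on the configured port:

curl 'http://localhost:9600/?pretty'     # basic Logstash node info
sudo ss -tlnp | grep 5043                # is the beats input listening?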
Now to configure Filebeat to push my nginx access.log into Logstash:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/kev/vps1-logs/access.log

output.logstash:
  hosts: ["192.168.1.94:5043"]
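Filebeat can also validate its own config and its connection to Logstash before shipping anything; on the 6.x releases this looks like the following (older 5.x versions used a -configtest flag instead):

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml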
Start Filebeat to upload the nginx access.log file (this is a local copy of the access.log from my nginx server, but I understand you can use Filebeat to ship updates from your server to your ELK server on the fly):
/usr/share/filebeat/bin$ sudo ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
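One way to confirm the data actually landed is to ask Elasticsearch for its indices; by default Logstash writes to daily logstash-YYYY.MM.DD indices:

curl 'http://192.168.1.94:9200/_cat/indices?v'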
After the data had transferred, hitting the Kibana site at http://192.168.1.94:5601/ I can now start querying my log data.
It looks like I might have some work to do to better tag my log data, but with the data imported, time to start checking out the query syntax.
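For example, since there's no grok filter yet, each whole nginx log line lands in the message field, so searches in the Kibana bar are limited to token matches like these (example Lucene query-string syntax, not from the original post; the field names will get more useful once filtering is set up):

message:404            # lines containing the token 404
message:"POST"         # lines containing POST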