From the FAQ: first, set the number of huge pages:
sudo sysctl -w vm.nr_hugepages=128
then add the following lines to /etc/security/limits.conf:
* soft memlock 262144
* hard memlock 262144
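To make the hugepage setting persist across reboots, it can also be added to /etc/sysctl.conf, and the allocation verified from /proc/meminfo. A minimal sketch, assuming the same value of 128:

# Persist the setting across reboots (assumes 128 pages as above)
echo "vm.nr_hugepages=128" | sudo tee -a /etc/sysctl.conf

# Check how many huge pages were actually allocated
grep HugePages /proc/meminfo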
After logging in to the ESXi Host Client web UI, the Unhandled Exception dialog has started to bug me, but I think it has been fixed in a patch release since I first installed ESXi 6.5.
This how-to doc describes the steps for updating the Host Client using the Host Client web UI itself, via Host / Manage / Packages / Install Update, pasting in the URL to the current/latest .vib file, which you can get from here.
The Unhandled Exception issue described here is now fixed – the specific update I installed was:
http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-7119706.vib
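As an alternative to the web UI, the same .vib can be installed from an SSH session on the host with esxcli. A sketch, assuming SSH access is enabled on the host:

# Install the Host Client update directly from the download URL
esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-7119706.vib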
Installing and setting up Kibana to analyze some log files is not a trivial task. Kibana is just one part in a stack of tools typically used together:
Elasticsearch
Logstash
Kibana
Together these are referred to as the ELK stack.
Working through the install steps and many other how-to guides, another piece of the puzzle commonly referenced is Filebeat, which watches local files (e.g. logs) and ships them to either Elasticsearch or Logstash. Installation instructions here.
I have an nginx access.log file that I’m interested in taking a look at for common patterns and trends. The flow and interaction of each of these tools in the stack looks like:
access.log → Filebeat → Logstash → Elasticsearch → Kibana
First I configured Logstash to define a simple pipeline for ingesting my log files. At this point I haven’t configured any filtering. This is in pipeline1.conf:
input {
    beats {
        port => "5043"
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
    elasticsearch {
        hosts => [ "192.168.1.94:9200" ]
    }
}
This defines Filebeat as the input for incoming log data, and Elasticsearch as the output for the results.
Start up Logstash with:
sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.reload.automatic
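If the pipeline fails to start, the config can be syntax-checked first with Logstash’s --config.test_and_exit flag, using the same paths as above:

sudo bin/logstash --path.settings=/etc/logstash/ -f pipeline1.conf --config.test_and_exit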
Now to configure Filebeat, in /etc/filebeat/filebeat.yml, to push my nginx access.log into Logstash:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/kev/vps1-logs/access.log

output.logstash:
  hosts: ["192.168.1.94:5043"]
Start Filebeat to upload the nginx access.log file (this is a local copy of the access.log from my nginx server, but I understand you can use Filebeat to ship updates from your server to your ELK server on the fly):
/usr/share/filebeat/bin$ sudo ./filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
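Filebeat can also sanity-check its config and its connection to the Logstash endpoint before shipping anything. A sketch, using the same config path (the test subcommand is available in Filebeat 6.x):

# Verify the config file parses
sudo ./filebeat test config -c /etc/filebeat/filebeat.yml

# Verify the Logstash output at 192.168.1.94:5043 is reachable
sudo ./filebeat test output -c /etc/filebeat/filebeat.yml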
After the data had transferred, hitting the Kibana site at http://192.168.1.94:5601/ I can now start querying my log data.
It looks like I might have some work to do to better tag my log data, but with the data imported, time to start checking out the query syntax.
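Since there’s no Logstash filter parsing the nginx log lines into separate fields yet, everything lands in the message field, so a first Lucene-style query in the Kibana search bar is limited to matching against that. A couple of hypothetical examples:

message: 404
message: "GET /feed/"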
After installing GitLab from the Omnibus install, use the gitlab-ctl command to query status and to start/stop the GitLab service (see here):
$ sudo gitlab-ctl status
If GitLab’s main service has been disabled, all of the sub-services will report ‘runsv not running’:
fail: gitaly: runsv not running
You can re-enable the main service to run at startup with (see here):
sudo systemctl enable gitlab-runsvdir.service
To disable startup at boot:
sudo systemctl disable gitlab-runsvdir.service
If runsvdir is not enabled to start at boot, you can start it manually with:
sudo systemctl start gitlab-runsvdir.service
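To check whether the service is currently enabled to start at boot:

systemctl is-enabled gitlab-runsvdir.service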
To start/stop gitlab:
$ sudo gitlab-ctl start
$ sudo gitlab-ctl stop
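gitlab-ctl can also start, stop, or restart an individual sub-service, and tail its logs; the service names are the ones listed by gitlab-ctl status. For example:

$ sudo gitlab-ctl restart nginx
$ sudo gitlab-ctl tail gitaly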