Installing rtl-sdr and dump1090 on a Raspberry Pi to receive ADS-B signals

I’ve gone through these steps a couple of times when setting up a new SD card, and each time had to pull the steps together from various places, so in case it’s useful for someone else, here they are (assuming you’re installing on Raspbian):

Making and installing rtl-sdr from source

Instructions: http://sdr.osmocom.org/trac/wiki/rtl-sdr

Pre-req steps, if you don’t already have the following:

sudo apt-get install cmake

# USB library used by the rtl-sdr tools to talk to the dongle
sudo apt-get install libusb-1.0-0-dev
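
On a fresh Raspbian install you’ll most likely also need git and a compiler toolchain before building (not in the original notes, but the steps below assume they’re present):

# git to clone the source, build-essential for gcc/make
sudo apt-get install git build-essential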

Get the source:

git clone git://git.osmocom.org/rtl-sdr.git

Build:

cd rtl-sdr/
mkdir build
cd build
cmake ../
make
sudo make install
sudo ldconfig

If you get permissions errors like this when using any of the rtl_* commands:

Using device 0: Terratec T Stick PLUS
usb_open error -3
Please fix the device permissions, e.g. by installing the udev rules file rtl-sdr.rules
Failed to open rtlsdr device #0.

Then you should be able to add a line to

/etc/udev/rules.d/rtl-sdr.rules

to set up the correct permissions for your specific card. You can find its IDs by running lsusb, e.g. for mine:

Bus 001 Device 004: ID 0ccd:00d7 TerraTec Electronic GmbH

From this I believe you take the vendor and product ID values (0ccd:00d7 in my case) and insert them into a new line in rtl-sdr.rules like:

SUBSYSTEMS=="usb", ATTRS{idVendor}=="0ccd", ATTRS{idProduct}=="00d7", MODE:="0666"

and then restart udev:

sudo service udev restart
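
If the udev service name differs on your release, reloading the rules directly with udevadm should also work (an alternative to the command above, not part of the original steps):

sudo udevadm control --reload-rules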

… reboot and that should be fixed. Alternatively, you can still run the apps with sudo.

To test, try starting up the rtl_tcp server:

sudo rtl_tcp -a your_ip

and you might see a message about the device already in use by another kernel module:

Found 1 device(s):
  0:  Realtek, RTL2838UHIDIR, SN: 00000001
Using device 0: Terratec T Stick PLUS
Kernel driver is active, or device is claimed by second instance of librtlsdr.
In the first case, please either detach or blacklist the kernel module
(dvb_usb_rtl28xxu), or enable automatic detaching at compile time.
usb_claim_interface error -6
Failed to open rtlsdr device #0.

This is saying that the dvb_usb_rtl28xxu kernel module has already claimed the device. From the instructions here, you can temporarily unload this module:

sudo rmmod dvb_usb_rtl28xxu

or permanently prevent it from loading with a blacklist entry in /etc/modprobe.d – add a new file there named something like rtl-sdr.conf, and add one line with the name of the above driver:

blacklist dvb_usb_rtl28xxu
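
One way to create that file in a single command (the file name is just a suggestion, as above):

echo 'blacklist dvb_usb_rtl28xxu' | sudo tee /etc/modprobe.d/rtl-sdr.conf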

Reboot and now you should be good to go with the rtl_* commands.
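
As a quick sanity check that the dongle is usable, rtl_test (installed along with the other rtl_* tools) should find the device and start reading samples – press Ctrl-C to stop it:

rtl_test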

 

Making and installing dump1090:

From https://github.com/MalcolmRobb/dump1090

git clone https://github.com/MalcolmRobb/dump1090.git
cd dump1090
make

Run in interactive mode:

./dump1090 --interactive

or net mode to enable the webserver (point a browser at your Pi’s IP address on port 8080):

./dump1090 --net
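
If you don’t have a browser handy, you can check the webserver is up from another machine by fetching the map page it serves on that port (substitute your Pi’s IP address):

curl http://your_pi_ip:8080/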


Recovering MongoDB from an unclean shutdown

If MongoDB is refusing to start up and you see this message:

**************
 old lock file: \data\db\mongod.lock.  probably means unclean shutdown,
 but there are no journal files to recover.
 this is likely human error or filesystem corruption.
 found 1 dbs.
 see: http://dochub.mongodb.org/core/repair for more information
 *************

… see the instructions pointed to by the suggested URL.

Try starting up with the --repair option. When it completes, restart the server process as normal.

If you have the ‘probably means unclean shutdown’ message, remove the mongod.lock file by hand and then restart with --repair
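
As a rough sketch of those two steps on a Linux install (paths here assume a dbpath of /data/db – adjust to whatever your error message shows):

# remove the stale lock file left behind by the unclean shutdown
sudo rm /data/db/mongod.lock
# run the repair, then start mongod as normal once it finishes
mongod --dbpath /data/db --repair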


SSLProtocolException in Java SE 7 & SE 8: “handshake alert: unrecognized_name”

Caused by: com.sun.jersey.api.client.ClientHandlerException: javax.net.ssl.SSLProtocolException: handshake alert: unrecognized_name
    at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:151)
    at com.sun.jersey.api.client.Client.handle(Client.java:648)
    at com.sun.jersey.api.client.WebResource.handle(WebResource.java:680)
    at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
    at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:568)

This exception comes from an SSL check introduced in Java SE 7 that verifies your SSL certificate matches your domain name. For development, if you’re using a self-signed certificate for testing that does not match your domain name, you can turn the check off and ignore the error with:

-Djsse.enableSNIExtension=false
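
For example, passed as a JVM argument when starting your application (the jar name here is just a placeholder):

java -Djsse.enableSNIExtension=false -jar your-app.jar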


WordPress Permalink formats

Having just moved my WordPress blog to OpenShift, it’s amazing how many settings apparently I had tweaked and customized on my previous site. I just followed a Google search link to one of my own posts and got a 404, and realized that the link format to my posts had changed.

I used to have links like /year/month/day/postname, but after the move the URLs only looked like ?p=number. The first format is one of the Permalink formats, which you can customize under Settings/Permalinks. More info here.


Another server drive failure – so migrated my WordPress blog to OpenShift

So it happened. Again. Although the last time was at least 5 years ago. A ton of i/o errors in syslog, and one of my drives in a RAID1 array is not responding. Strangely, the other drive, although still working, is reporting in SMART that it is also about to die. It seems unlikely that two drives would go at the same time?

SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.

Uh-oh.

So here’s the deal. The last time, roughly 5 years ago, I had a drive completely die and my Ubuntu server that I run from home wouldn’t boot. I lost several months of blog posts and notes. When I rebuilt it, I installed a pair of 250GB Hitachi Deskstar P7K500 drives in RAID1 configuration. The odd thing is that one of the drives kept dropping out of the array every few months in the last year, but it could be added back with the mdadm commands so I didn’t think much of it. Maybe I should have looked more closely in the logs to see what was going on.

So I’m at the decision point. I probably shouldn’t just replace one of the drives if the other is already bad; I should probably replace them both. Do I want to spend a couple of hundred on another pair of drives? At least the RAID array probably saved me from completely losing everything again.

I’ve run my own Linux server from home since about 2000. It’s physically changed motherboards a couple of times; it’s been a PIII and most recently a P4 with just 512MB of RAM. I’ve run JBoss versions from 3.x or so up to 5.x and Glassfish 3.x, and I’ve run my blog on my own custom app (BBWeblog), then Drupal, Joomla, and most recently WordPress on Apache. I’ve enjoyed running my own server since it means I can do whatever I like with it, and other than the cost for the hardware, there’s no hosting costs for my sites.

I think it’s reached the point though where enough is enough. Time to move online somewhere. Given that I’ve been using OpenShift for a number of projects at work, it seemed an easy choice to spin up a gear using the WordPress template and just import my site. And it only took a couple of minutes to get it up and running. I spent more time playing around with the WordPress Themes than I did actually setting it up and importing my site.

The only slightly tricky part was to update the DNS entry to point to OpenShift, which involved the following steps:

  • Update my record on zoneedit.com to delete the entry for my domain. I’ve been using ZoneEdit because they support Dynamic IP addresses by giving you a script to run locally to update what your actual IP address is periodically.
  • Update my GoDaddy account to remove the ZoneEdit DNS servers, switch from Custom DNS settings to Standard, click on DNS Zone File, and then add a CNAME record for ‘www’ pointing to my app’s URL on OpenShift
  • Back to OpenShift, run ‘rhc add-alias wordpress www.kevinhooke.com’ where wordpress is the name of my app.

This post was useful, and here’s the docs for ‘rhc add-alias’

Done! It actually only took a couple of minutes for the changes to get reflected too, i.e. if I ping www.kevinhooke.com it’s now picking up the new IP for my OpenShift server on AWS.
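
A quick way to check the new record from any machine (dig isn’t part of the steps above, just a handy verification):

dig +short www.kevinhooke.com CNAME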

I’m probably still going to play around with some customization options on WordPress, but from start to finish it was probably less than an hour. It would have taken me far longer than that to install new drives in my server, reinstall from a drive image, and get it all set up again. So, fingers crossed, here’s looking forward to my new home on OpenShift :-)


Combining find and grep

Quick note to remember this syntax, as every few months I need to grep within a number of files:

find . -name "pattern" -exec grep "pattern" {} \;

Grep options:
-H print the filename in the results
-n print the line number where the match was found
-l print just the names of files containing a match (useful if you want to find files containing a match but don’t care how many matches are in each file)
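
For example, to print the file name and line number for every match (the file pattern and search string here are just placeholders):

find . -name "*.java" -exec grep -Hn "TODO" {} \;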

Pipe to wc -l to count the number of matching files, e.g.:

find . -name "pattern" -exec grep -l "pattern" {} \; | wc -l

Use egrep (or grep -E) if you need extended regex syntax in the pattern.


Merging a new OpenShift project git repo with an existing local project repo

Steps mainly from here, assuming you already have a local git repo for an existing project and you want to merge it into a newly created project on OpenShift for deployment:

Add the OpenShift repo as a remote named openshift:

git remote add openshift openshift_git_repo_url
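
You’ll also need to fetch the OpenShift repo’s branches so that openshift/master exists locally before the merge below (this step isn’t spelled out in the original notes):

git fetch openshift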

Merge the content from the newly created repo into your existing project (this will be some pom.xml changes and the .openshift directory for remote server settings and triggers etc):

git merge openshift/master -s recursive -X ours

This didn’t do anything for me – it just said ‘already up to date’ – so I pulled down the files from the remote with:

git pull openshift master

which resulted in a couple of conflicts, like pom.xml. Update the files with conflicts to resolve them and commit the changes, then push back to the remote:

git push openshift master

Done!
