Building and running a Packet Radio Winlink solution in a Docker container, on a Raspberry Pi

Running Packet Radio apps in a Docker container, on a Raspberry Pi? Are you mad, I hear you ask? Isn’t it hard enough to get ax25 and Packet Radio up and running on the Pi as it is? Having done this a few times already, that was my thinking too, but I had the crazy idea that encapsulating most of the config and setup in Dockerfiles to build preconfigured containers might be worth exploring.

Installing and configuring ax25 on the Raspberry Pi, along with Winlink clients that use it such as paclink-unix or Pat, can be done and works well, but the steps (as documented, for example, in this comprehensive guide for building and installing paclink-unix) span several pages of instructions. That can be daunting even for those familiar with building and installing apps from source on Linux.

Since the steps are well documented, I wondered if they could be captured in a Dockerfile to automate building a self-contained and ready to run Docker container.

tl;dr: The short story

I did eventually get this working, building ax25 from source and using Pat, but it took me down a rabbit hole for several hours. Skip to the end if you just want to find out how to build and run the completed Docker containers.

The Longer Explanation

I could not get ax25 to work self-contained in its own Docker container; I ran into issues either accessing the serial device for my TNC-Pi from inside the container, or creating an ax0 interface when running kissattach.

If you expose the serial port on the Raspberry Pi to the container running paclink-unix:

docker run -it --device=/dev/ttyAMA0 rpi-paclink

… then running kissattach in the container gives:

kissattach: Error setting line discipline: TIOCSETD: Operation not permitted

Are you sure you have enabled MKISS support in the kernel

or, if you made it a module, that the module is loaded?

Alternatively, starting with --privileged:

docker run -it --privileged -v /dev/ttyAMA0:/dev/ttyAMA0 rpi-paclink

gives:

sudo kissattach /dev/ttyAMA0 1

kissattach: SIOCSIFMTU: No such device

I was initially trying to get this working because I wanted to run paclink-unix for Winlink email. When you run paclink-unix’s make script it builds wl2kserial and wl2ktelnet, but not wl2kax25. I had run into this before; it seems wl2kax25 doesn’t compile unless a later version of the ax25 stack has been compiled from source.

I changed gears and looked at how you could share an up-and-running ax25 stack from the Docker host, and it turns out this is easy to do: pass the --network=host param when starting the container, and ax0 then appears in the container’s network interfaces.

The next issue I ran into is that configuring postfix as your email transport takes some effort. bazaudi.com has a very detailed set of instructions, but I could only get it working for incoming email via wl2ktelnet and wl2kax25, not for sending. Time to try something else.

Installing and configuring Pat in a Container

I had tried to get Pat working once before: I think I had it running on either a Debian or Ubuntu box, but couldn’t get it going on Raspbian on a Pi. I decided to try again in this setup, and reusing the base image with ax25 already compiled from source, it was actually very easy to get Pat up and running.

This depends on having ax25 installed and configured on the host Pi OS, and then shared to the container with --network=host. I know, this seems redundant, but it’s the only way I managed to get it working.

My base image for Raspbian including ax25 built from source is here: https://github.com/kevinhooke/DockerRPiAX25FromSource 

To build the image, pass in your callsign as a build argument (substituting your callsign for ‘yourcall’):

docker build --build-arg MYCALL=yourcall -t rpi-ax25 .

Next build an image containing Pat, based on the image we just built – the source for this Dockerfile is here: https://github.com/kevinhooke/DockerRPiPATWinlink.git

Build this image with:

docker build --build-arg MYCALL=yourcall --build-arg MYCALLSSID=yourcall-1 \
    --build-arg MYLOC=AA11aa --build-arg WINLINKPASS=yourwlpass \
    -t rpi-wl-pat .

Now to start it up. Remember we’re relying on an ax25 connection from the host, which we’re sharing with the guest container. My TNC-Pi board connected to my Raspberry Pi is available on serial device /dev/ttyAMA0, so I bring up my ax0 port like this:

sudo kissattach /dev/ttyAMA0 1 10.1.1.1

Next, run the container as a daemon, share the host networking, and expose port 8080 so we can access the Pat webapp:

docker run -d --network=host -p 8080:8080 rpi-wl-pat

Now let’s fire up the webapp:

Looks good, this is the Pat inbox. Let’s send a test email to myself; this is going to be sent using Packet over 2m VHF via my local Winlink gateway, AG6QO-10. I have this gateway preconfigured in my Pat config file, which you can configure yourself before creating the rpi-wl-pat image:

Remember the Pat webapp that we’re interacting with here is running in a Docker container, on a Raspberry Pi. I just happen to be accessing it remotely from my Mac. For mobile operation or out in the field, you could attach a touchscreen to the Pi and connect a keyboard and mouse too.

To send my email over RF to the Winlink gateway, click Action, then Connect:

In the Pat status window we now see a log of the Packet exchange between my station and AG6QO-10 via BERR37:

A few seconds later the email arrives in my gmail inbox:

If I reply to the email in gmail, it will go back over the Winlink network and be waiting for me the next time I connect to the Winlink gateway over RF. Let’s give that a go in Pat: select Action and Connect, we connect to AG6QO-10 again over 2m VHF, and now the reply is in my inbox in Pat:

Success!

Building ARM Docker images on the Raspberry Pi

Install Docker for ARM using the install script:

curl -sSL https://get.docker.com | sh

From: https://www.raspberrypi.org/blog/docker-comes-to-raspberry-pi/

Set Docker to start up as a service:

sudo systemctl enable docker

Start the service manually now (or reboot to start automatically):

sudo systemctl start docker

Add user to docker group (to run docker cli without sudo):

sudo usermod -aG docker pi


To create a new image from a Raspbian base for ARM, use the Raspbian images from resin (in your Dockerfile):

FROM resin/rpi-raspbian:latest

From: http://blog.alexellis.io/getting-started-with-docker-on-raspberry-pi/

Edit your Dockerfile to include and configure whatever you need, and build an image as normal on the Pi using:

docker build -t tagname .

… and off you go.

Building an Amateur Radio Packet to Twitter bridge: Part 3 – preventing duplicate tweets

In my last post (part 2, also see part 1 here), I talked about Twitter’s API rate limits. Since many Packet Radio transmissions are duplicates by their nature, for example, beacon packets and ID packets, it’s important to have some kind of mechanism to prevent sending these through to Twitter.

The approach I used was to insert each received packet into a MongoDB database, storing the packet data, who the packet was from and to, and additional metadata about the packet, for example when the corresponding Tweet was last sent and when the packet was last heard.

Here’s an example of what each document stored looks like:

{
 "_id" : ObjectId("5909828e5a2f130fc8039882"),
 "firstHeard" : ISODate("2017-05-03T07:11:10.051Z"),
 "from" : "AE6OR ",
 "heardTimes" : 1,
 "infoString" : "Š¤¤@à–¤ˆŽ@@à–„Š¤¤@ஞžˆ²@`–„Š¨@`¨‚žŠ@a\u0003ðHello from 5W Garage packet node AUBURN 73's Steli !\r",
 "lastHeard" : ISODate("2017-05-03T07:11:10.051Z"),
 "lastTweet" : ISODate("2017-05-03T07:11:10.051Z"),
 "to" : "BEACON",
 "tweet" : "Received station: AE6OR  sending to: BEACON : Š¤¤@à–¤ˆŽ@@à–„Š¤¤@ஞžˆ²@`–„Š¨@`¨‚žŠ@a\u0003ðHello from 5W Garage packet node AUBURN 73's Steli !\r"
}

My current logic to check for duplicates and record when a tweet was last sent is as follows (the steps are pulled together into a single code sketch after the list):

    1. search for a matching document (tweet details) with a $lt condition that lastTweet is before ‘now – 12 hours’:
      document.lastTweet = {'$lt' : moment().utc().subtract(12, 'hours').toDate() };
    2. This is executed as a findOne() query:
      db.collection('tweets').findOne(document).then(function (olderResult) {
         ...
      }
    3. If an existing document older than 12 hours is found, update properties to record that this same packet was seen now and that the time it was re-tweeted was also now (we’ll resend it to the Twitter api after the db updates):
      if (olderResult != null) {
          sendTweet = true;
          updateProperties = {
              lastHeard: now,
              lastTweet: now
          };
      }
      else {
          updateProperties = {
              lastHeard: now
          };
      }

      If no document older than 12 hours was found, only the lastHeard property is set to be updated.

    4. To either update the existing document or insert a new document if this was the first time this packet was heard, do an ‘upsert’:
      db.collection('tweets').update(
          document,
          {
              $set: updateProperties,
              $inc: {heardTimes: 1},
              $setOnInsert: {
                  firstHeard: now,
                  lastTweet: now
              }
          },
          {upsert: true}
      ).then(function (response) {
      ...
      }
    5. Depending on whether the result indicates an insert or an update, set a property so that on return we know whether to send a new Tweet or not:
      if(response.upserted != null || sendTweet) {
          response.sendTweet = true;
      }
      else{
          response.sendTweet = false;
      }
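
Pulled together, the whole flow looks something like the sketch below. This is a condensed illustration of the steps above rather than the project’s exact code: the recordPacketAndCheckDuplicate function name and the packet argument (with from, to and tweet fields matching the stored document shown earlier) are my own, and it assumes an already connected db from the node mongodb driver plus the moment library.

// Condensed sketch of steps 1-5 above (illustrative, not the project's exact code).
// Assumes: 'db' is a connected Db from the node 'mongodb' driver, and the 'moment'
// package is installed. The packet argument carries from/to/tweet fields matching
// the document format shown earlier.
var moment = require('moment');

function recordPacketAndCheckDuplicate(db, packet) {
    var now = new Date();
    var sendTweet = false;

    // fields that identify "the same packet" in the tweets collection
    var packetKey = {
        from: packet.from,
        to: packet.to,
        tweet: packet.tweet
    };

    // steps 1 and 2: look for this packet last tweeted more than 12 hours ago
    var olderThan12Hours = {
        from: packet.from,
        to: packet.to,
        tweet: packet.tweet,
        lastTweet: { '$lt': moment().utc().subtract(12, 'hours').toDate() }
    };

    return db.collection('tweets').findOne(olderThan12Hours)
        .then(function (olderResult) {
            // step 3: decide which properties to update
            var updateProperties;
            if (olderResult != null) {
                sendTweet = true;
                updateProperties = { lastHeard: now, lastTweet: now };
            } else {
                updateProperties = { lastHeard: now };
            }

            // step 4: upsert against the identifying fields
            return db.collection('tweets').update(
                packetKey,
                {
                    $set: updateProperties,
                    $inc: { heardTimes: 1 },
                    $setOnInsert: { firstHeard: now, lastTweet: now }
                },
                { upsert: true }
            );
        })
        .then(function (response) {
            // step 5: tweet if this was a brand new packet, or one last tweeted more
            // than 12 hours ago (the response.upserted check mirrors step 5 above;
            // where the driver surfaces the upsert result can vary by version)
            response.sendTweet = (response.upserted != null) || sendTweet;
            return response;
        });
}

The caller can then check response.sendTweet to decide whether to call the Twitter POST api.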

The approach is not yet completely foolproof, but so far it has stopped the majority of duplicate Tweets from being sent to Twitter.

For the full source check the project on github here: https://github.com/kevinhooke/PacketRadioToTwitter .

Building an Amateur Radio packet to Twitter bridge: Part 2 – Understanding Twitter’s API rate limits

Quick update on my project to build an Amateur Radio Packet to Twitter bridge (see here for the first post).

I’ve moved the node.js code to a Raspberry Pi 3. So far I have up and running:

Twitter provides an incredibly useful REST based API to integrate with their service, but understandably your usage is ‘rate limited’ so you’re not retrieving vast amounts of data or flooding the service with spam Tweets.

To understand their rate limit approach, see this article: https://dev.twitter.com/rest/public/rate-limiting

For limits on POSTs, for example for creating new Tweets: https://support.twitter.com/articles/15364

Per the above article, POSTs are limited to 2,400 per day, or 100 per hour. This means that if your app could potentially call the POST apis more than 100 times per hour, you need to build in some limiting logic so you don’t exceed this threshold; a simple sketch of one approach follows.
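
As a sketch of what that limiting logic could look like: keep a counter for the current one-hour window and refuse to post once a self-imposed cap below Twitter’s limit is reached. The names here (canPostTweet, HOURLY_POST_LIMIT) are just illustrative, not from any Twitter api or library.

// Illustrative client-side limiter: count POSTs in the current one-hour window
// and refuse to send once a self-imposed cap (below Twitter's 100/hour) is hit.
var HOURLY_POST_LIMIT = 90;
var windowStart = Date.now();
var postsThisHour = 0;

function canPostTweet() {
    var now = Date.now();
    // start a fresh window once an hour has passed
    if (now - windowStart >= 60 * 60 * 1000) {
        windowStart = now;
        postsThisHour = 0;
    }
    if (postsThisHour >= HOURLY_POST_LIMIT) {
        return false;
    }
    postsThisHour++;
    return true;
}

Anything that would push past the cap can then be dropped or queued for the next window.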

The rate limits for the GET REST apis are here; my application doesn’t rely heavily on GETs though, as the majority of its calls are POSTs:

https://dev.twitter.com/rest/public/rate-limits

Twitter’s api prevents you from posting duplicate Tweets (see here), although they don’t publish how much time needs to elapse before an identical post is no longer considered a duplicate (see here). This wasn’t something I had planned on handling, but given the repetitive nature of some packet transmissions, for example BEACON and ID packets, it has to be a feature of my app; otherwise, in the worst case, it would attempt to tweet a station’s BEACON packet every minute, continuously.

To handle this I’m storing each Tweet sent, along with additional tracking stats, like the time the last Tweet was sent. Using this timestamp I can calculate the elapsed time between now and when it was last sent, to make sure enough time has passed before I attempt to Tweet the packet again.
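
With moment, that elapsed-time check is only a couple of lines. A rough sketch, where storedPacket stands in for the saved tracking document and the 12 hour window matches what the duplicate check uses:

// sketch: has enough time passed since this packet was last tweeted?
// 'storedPacket' stands in for the tracking document saved in MongoDB
var moment = require('moment');

var hoursSinceLastTweet = moment().diff(moment(storedPacket.lastTweet), 'hours');
var okToTweetAgain = hoursSinceLastTweet >= 12;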

This is still a work in progress and there’s some tweaking to do on the duplicate post detection and rate limiting, but while I’m testing you can see the Tweets sent from my Twitter account @KK6DCT_Packet.

Here’s an example of the packets currently being Tweeted:

MongoDB on the Raspberry Pi

I looked at running MongoDB on the Pi several months back, and while you can download the latest source and compile it yourself (which takes a while), there didn’t seem to be any prebuilt binaries (at least not when I last looked).

It seems the 32-bit version for ARM is now in the Raspbian repos, so you can just do (from here):

sudo apt-get install mongodb-server

and it gets installed and set up, ready to go.
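
As a quick sanity check that the install is reachable from node.js, a minimal connection test like this should print a message and exit cleanly (a sketch, assuming the 2.x ‘mongodb’ npm driver and the default port 27017; ‘packets’ is just an example database name):

// Quick connectivity check against the locally installed MongoDB.
// Assumes the 'mongodb' npm package (2.x driver) and the default port 27017;
// 'packets' is just an example database name.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/packets')
    .then(function (db) {
        console.log('Connected to MongoDB on the Pi');
        return db.close();
    })
    .catch(function (err) {
        console.error('Connection failed: ' + err);
    });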

If you’re looking for a later 64-bit version, some people have rebuilt these for you. Take a look here, or for more info, try Andy Felong’s posts, e.g. here and here.

Computer History Documentaries – part 2

It’s been a while since I posted this list of some of the computer history documentaries and dramas that I’ve found most interesting, so I have a few more to recommend and add to the list:

  • Silicon Cowboys – fascinating documentary about Compaq, the development of their luggable PC, and their impact on the development of the PC Compatible market. (4/27/17 – this is currently on Netflix)
  • Bedrooms to Billions – The Amiga Years: incredibly well put together indie documentary about the Amiga
  • Bedrooms to Billions: documentary about the development of the home computer games industry in the UK and Europe. Includes many interviews with the original developers and many involved in the industry at the time. If you had any interest in computer games around the mid to late 80s in the UK, this is a must watch
  • Beep – documentary about sound and music development for computer games
  • Get Lamp – documentary by Jason Scott, covering text based computer adventure games
Nicola Caulfield & Anthony Caulfield (who produced the Bedrooms to Billions documentaries) currently have a new documentary called ‘The Playstation Revolution’ that just reached its funding goal on Kickstarter, but if you’d still like to back it you can do so via MegaFounder (linked from the Late Backer link on the Kickstarter page).

Updating/installing node.js on the Raspberry Pi

The latest versions of Raspbian (e.g. Jessie) come with an older version of node.js preinstalled. If you search around for how to install node.js on the Pi you’ll find a number of different approaches, as it seems there’s not an official ARM compiled version of the latest releases in the Debian repos.

This project provides later versions compiled for ARM; follow the instructions on their site to download and install from the .deb file.

Before you start, if you already have an older version installed (check ‘node -v’), uninstall it first. The version I had on my fresh Jessie install was from nodejs-legacy, so ‘sudo apt-get remove nodejs-legacy’ did the trick.

Amateur Radio homebrew: Raspberry Pi + Packet Radio + social networking integration

I’m putting something together for our River City Amateur Radio Comms Society homebrew show ’n’ tell later this year. Here are my ingredients so far:

I’m thinking of building a bridge between Amateur Radio Packet and social networking, like Twitter.

So far I’ve roughed out the node.js to Twitter integration using node-oauth, and I’ve put together a simple prototype using the node-ax25 library to connect to the KISS virtual TNC on Direwolf. It receives packets and writes callsigns and messages to the console.

Right now I’m testing this on a PC running Debian, with a Rigblaster Plug n Play connected to an Icom 880h. Later when my TNC-Pi arrives I’ll migrate this to the Pi.

So far, using the node-ax25 library looks pretty easy. Here’s some code to dump received callsigns to the console:

var ax25 = require("ax25");

var tnc = new ax25.kissTNC(
    {   serialPort : "/dev/pts/1",
        baudRate : 9600
    }
);

tnc.on("frame",
    function(frame) {
        //console.log("Received AX.25 frame: " + frame);
        var packet = new ax25.Packet({ 'frame' : frame });
        console.log(new Date() + "From "
            + formatCallsign(packet.sourceCallsign, packet.sourceSSID)
            + " to "
            + formatCallsign(packet.destinationCallsign, packet.destinationSSID));

        if(packet.infoString !=""){
            console.log(">  " + packet.infoString);
        }
    }
);

/**
 * Formats a callsign optionally including the ssid if present
 */
function formatCallsign(callsign, ssid){
    var formattedCallsign = "";
    if(ssid == "" || ssid == "0"){
         formattedCallsign = callsign;
    }
    else{
        formattedCallsign = callsign + "-" + ssid;
    }

   return formattedCallsign;
}

The output for received messages so far looks like this:

Wed Apr 19 2017 23:10:00 GMT-0700 (PDT)From KBERR  to KJ6NKR

Wed Apr 19 2017 23:12:05 GMT-0700 (PDT)From AE6OR  to BEACON

>  Š¤¤@à–¤ˆŽ@@`–„Š¤¤@`®žžˆ²@`–„Š¨@`¨‚žŠ@aðHello from 5W Garage packet node AUBURN 73's Steli !

Wed Apr 19 2017 23:12:08 GMT-0700 (PDT)From AE6OR  to BEACON

>  Š¤¤@à–¤ˆŽ@@à–„Š¤¤@ஞžˆ²@`–„Š¨@`¨‚žŠ@aðHello from 5W Garage packet node AUBURN 73's Steli !

Wed Apr 19 2017 23:16:49 GMT-0700 (PDT)From K6JAC -6 to ID    

>  K6JAC-6/R BBOX/B KBERR/N

Wed Apr 19 2017 23:19:31 GMT-0700 (PDT)From K6WLS -4 to ID    

>  Network Node (KWDLD)

Wed Apr 19 2017 23:19:50 GMT-0700 (PDT)From NM3S   to BEACON

>  Mike in South Sac.  Please feel free to leave a message. 73's

Calling Twitter REST api from JavaScript with OAUTH

I’ve started on a project where I need to call Twitter’s REST apis from a Node.js JavaScript app. I’ve built a Java app before that integrated with Twitter, but used a library to help with the OAUTH authentication. Looking around for a JavaScript library, it looks like node-oauth does what I need, so I gave it a go.

Twitter’s API docs are pretty good, but I ran into an issue with node-oauth with an error coming back from:

POST /statuses/update.json

which returned this message:

{ statusCode: 401,
 data: '{"errors":[{"code":32,"message":"Could not authenticate you."}]}' }

which is odd, because authenticating with node-oauth and then calling any of the GET apis was working fine. If you Google this error there are numerous posts about it for any number of reasons, so I’m not sure it’s a particularly specific message.

Here’s a working example for a GET:

First, use node-oauth to authenticate with your Twitter key, secret, and app access token and secret (which you can set up here):

var OAuth = require('oauth');

// config holds the Twitter consumer key/secret and access token/secret
var oauth = new OAuth.OAuth(
    'https://api.twitter.com/oauth/request_token',
    'https://api.twitter.com/oauth/access_token',
    config.twitterConsumerKey,
    config.twitterSecretKey,
    '1.0A',
    null,
    'HMAC-SHA1'
);

Next, using the returned value from this call, make the GET request (it builds the request including the OAUTH headers for you):

//GET /search/tweets.json
oauth.get(
    'https://api.twitter.com/1.1/search/tweets.json?q=%40twitterapi',
    config.accessToken,
    config.accessTokenSecret,
    function (e, data, res){
        if (e) console.error(e);
        console.log(data);
    });

Attempting a POST to update the status for this account using a similar api call:

var status = "{'status':'test 3 from nodejs'}";

oauth.post('https://api.twitter.com/1.1/statuses/update.json',
    config.accessToken,
    config.accessTokenSecret,
    status,
    function(error, data) {
        console.log('\nPOST status:\n');
        console.log(error || data);
});

And this is the call that returned the error about “could not authenticate”.

Looking through a few other tickets for the node-oauth project, this one gave a clue – this particular issue was about special chars in the body content to the POST, but it gave an example of the body formed like this:

var status = ({'status':'test from nodejs'});

I would have thought passing a String would have worked, but I guess the api is expecting an object? Anyway, this is working, and looks good so far.
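
For reference, here’s the same POST again with only the body changed to the object form, which is the version that’s now working:

// Same call as before, but with the status passed as an object rather than a
// pre-serialized JSON string
var status = { status: 'test from nodejs' };

oauth.post(
    'https://api.twitter.com/1.1/statuses/update.json',
    config.accessToken,
    config.accessTokenSecret,
    status,
    function (error, data) {
        console.log('\nPOST status:\n');
        console.log(error || data);
    });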