River City Amateur Radio Communications Society weekly SSTV net (06/21/17)

The River City Amateur Radio Communications Society in Sacramento runs a weekly SSTV net on Wednesdays at 9pm local time (following the 2m net on the N6NA 2m repeater, and the 10m net). I’ve tried to receive the pictures before, but on 2m simplex it’s a bit far from most of the stations in the Sacramento area out to my QTH in Davis to get a good copy, and some of the stations I can’t copy at all.

This week we tried something different and ran the net on the club’s 440MHz repeater. This worked great for me, as this repeater gives great coverage over the Sacramento area and the surrounding region.

This was my first time actively checking in on the net, so I had a few things to learn on the fly! First, Multiscan 3B, which seems to be one of the most common SSTV apps for the Mac, doesn’t run reliably on current OS X 10.12.x versions. The last time I tried it I didn’t have any issues, but with the most recent macOS version it would only start up the first time after it was installed; every time after that it crashed.


For the first couple of pictures, I realized I was receiving through the built-in mic and not via my Rigblaster interface at all. Understandably, these first few pics were pretty terrible:

Part way through the net I switched to MMSSTV on Windows 10, running under Parallels on my Mac. My connection to the radio is through a Rigblaster, so I had to attach the Rigblaster input and output USB devices to my Windows 10 guest. Once I had it configured to receive and send through the Rigblaster, I was receiving great images from the other ops on the net, and managed to send and get good reports on a couple of pictures myself:

Now that I’ve got my config set up, I’m looking forward to our next SSTV net!

Never assume you know how something works by only observing its external behavior

As a developer, you should never assume you understand how something works solely by observing what it does. This is especially true if you are trying to fix something and your only understanding of the issue is the behavior that you can observe.

While you don’t have to understand how something works in order to use it, if you’re trying to fix something, especially software, it helps to understand how it works. The reason is that what you observe externally as a problem is usually only a symptom; it’s rarely the actual problem itself.

Let me give you an extremely simplified example. Let’s say you have an electric car, but you have no idea how the electric drivetrain works; you just know you press the accelerator pedal and it goes. One morning you get in the car, press the pedal, and nothing happens. In diagnosing the issue, the only thing you consider is the external symptom you can see: you press the pedal and the car doesn’t go. An extremely naive conclusion you could reach is that the accelerator pedal is broken (!). So you replace the pedal, but then you’re surprised to find that it still doesn’t work. (Ok, this is a contrived example to make the point – if you know enough to replace the accelerator pedal, you probably know enough about how the car works not to assume the pedal is broken!)

As a software developer or architect, when you diagnose issues you should always look under the covers and find out what’s actually going on. The problem you’re looking for is rarely the symptom that you can see (or that the user sees).

Installing El Capitan on my 2008 Mac Pro

My 2008 Mac Pro arrived, and it’s a shiny beast of a machine 🙂 It’s sitting beside an older relative, a 2002 Power Mac G4 Quicksilver.

It came with OS X 10.5 Leopard installed – it looked like a clean install, but as with any used machine, I like to do a clean install myself so I know what I’m starting with. After downloading OS X 10.11 El Capitan from the App Store on my MacBook Pro, I created a bootable USB flash drive using the steps described here.
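For reference, those steps use Apple’s createinstallmedia tool inside the downloaded installer app – something like the following, where /Volumes/Untitled is just a placeholder for whatever your USB drive is named:

sudo /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/Resources/createinstallmedia \
    --volume /Volumes/Untitled \
    --applicationpath /Applications/Install\ OS\ X\ El\ Capitan.app \
    --nointeraction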

On my first attempt to install, after about 20 mins, when the installer attempted to reboot for the first time, I had a blank screen and no activity. I rebooted back to the USB flash drive and started Disk Utility; the drive checked out clean and everything looked good, but there wasn’t a bootable partition.

On the second attempt I realized what had probably happened the first time: the power-save settings had kicked in and the monitor output had turned off, but it wasn’t waking from keyboard or mouse input. This time the screen turned back on, and the installer was stuck at ‘about a second remaining’,

but pressing Cmd-L to see the installer logs, there was a huge number of errors scrolling by to do with TSplicedFont and the Noto fonts. This seems to be a common issue with El Capitan, as described here. Ignoring the errors and waiting it out though, after about 20 mins stuck at ‘about a second remaining’ it did reboot and the installation continued as expected.

After the install completed, the machine started up successfully, and after walking through the setup dialogs to select language preferences and create an account, I was up and running with 10.11 El Cap.

First impressions: for a 9-year-old computer, this thing is pretty snappy. It’s comparable in responsiveness to my 2012 MacBook Pro with an i7, although with only a regular mechanical HDD it could be faster booting and loading apps; it’s definitely acceptable. For a desktop daily driver, it’s perfectly usable. The dual 2.8GHz Xeon CPUs are holding their own – I haven’t seen anything beyond 5% to 6% CPU usage using Chrome and browsing the web with about 20 or so tabs open. Where I think I might start to suffer, though, is that this machine only came with 4GB RAM. With my current Chrome usage it’s eating up about 3GB, so I have some to spare, but the interesting thing about these Mac Pros is the expandability – the 2008 supports 32GB per the specs and 64GB unofficially. I bet if I put in 16GB or so I would get a much better experience. Time to plan the upgrades 🙂


Retro collection just acquired a more recent, not-so-retro, addition (2008 Mac Pro 8 core)

Having grown up with 8-bit computers, starting with an Atari VCS and then a Sinclair ZX Spectrum, I find it fascinating that decades later there’s an increasing level of interest in computers from the 80s and 90s, with thriving online communities, podcasts, and even meetup groups of enthusiasts who get together to discuss the original hardware as well as new device add-ons that blend modern tech (e.g. using SD cards for storage) with old.

The ZX Spectrum has recently had a number of modern remakes:

As much as I really wanted to get a ZX Spectrum Next, I couldn’t bring myself to put down 175UKP for the base model. I suspect I might come back and pick one up at some point.

My other favorite computer was the Atari ST; I had a 520STFM. I picked up a 1040STF with an Atari monitor on eBay a while back, and it sits on my desk in my office. I also picked up a CosmosEx device, which is an interesting example of current tech complementing old – it’s a Raspberry Pi based device that provides SD card support for floppy disk and hard disk images, as well as USB keyboard and mouse support, and networking.

Something that’s interested me for a while is what it looks like to browse the web using old hardware. The short story is that it’s generally a terrible experience (slow, and current web technologies are poorly supported, if at all). I’ve tried setting up CAB on my ST, but with only 1MB RAM it can’t load anything but the simplest HTML page with text and 1 or 2 images before it runs out of memory.

For a while I browsed eBay looking to pick up a used Atari Falcon, but for a 25-year-old 16/32-bit computer it’s incredible that they typically go for anywhere from $800 to $1000, if you can even find one (they cost 599 UKP new when they launched). With its 68030, the Falcon has significantly more grunt than the original 68000-based STs.

I then got distracted by the idea of picking up a modern remake of an ST – the Coldfire project has developed the Firebee, which uses a 264MHz Coldfire processor and 512MB RAM, with 68000 backwards compatibility, but with the addition of modern hardware features like USB, PCI expansion slots, ethernet networking, and many of the features we take for granted in current devices. Despite torturing myself by watching every Firebee video on YouTube, the current price of a new Firebee, 560 Euros, is a little more than I can justify for a modern Atari ST in 2017 (despite how awesome it actually is).

Continuing with my (odd) interest in browsing the web on old hardware, I picked up a 2002 Power Mac G4 Quicksilver. Classilla on OS 9 is perfectly usable and TenFourFox on Mac OS 10.4 is ok, but (at least on my single-CPU G4) not really good enough for a daily driver (scrolling is sluggish).

I very nearly decided to up the horsepower and look for a dual-G5 Power Mac,

but noticed the prices started to get close to what you could pick up a used Intel Xeon Mac Pro for, so… long story short, I just picked up a 2008 8-core Mac Pro on eBay. Super excited for when it arrives!

Deploying Docker Containers to AWS EC2 Container Service (ECS)

I’ve spent a lot of time playing with Docker containers locally for various personal projects, but haven’t spent much time deploying them to the cloud. I did look at IBM Bluemix a while back, and their web console and toolset were a pretty good developer experience. I’m curious how OpenShift Online is evolving into a container-based service, as I’ve deployed many personal projects to OpenShift, and it has to be my favorite PaaS for features, ease of use, and cost.

AWS is the obvious leader in this space, and despite playing with a few EC2 services during the free developer year, I hadn’t yet tried to deploy Docker containers there.

AWS’s Docker support is EC2 Container Service, or ECS.

To get started:

Install the AWS CLI: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

On the Mac I installed this easily with ‘brew install awscli’, which was much simpler than installing Python and PIP per the official instructions (see here).

Create an AWS user in AWS IAM for authenticating between your local Docker install and ECS (this user is used instead of your master Amazon account credentials).

Run ‘aws configure’ locally and add the access key and secret key credentials from when you created your admin user in IAM.
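For example, the prompts look like this (the key values shown here are placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json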

Follow through the steps in the ECS Getting Started guide here: https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun

To summarize the steps in the getting started guide:

  • From the ECS Control Panel, create a Docker Image Repository: https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories
  • Connect your local Docker client with your Docker credentials in ECS:
    aws ecr get-login --region us-east-1
  • Copy and paste the docker login command from the previous step; this will log you in for 24 hours
  • Tag your image locally, ready to push to your ECS repository – use the repo URI from the first step:
docker tag imageid ecs-repo-uri

The example command in the docs looks like this:

docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/repository-name

For the last param, the tag name, use the ECS Docker repo URI from when you created the repo.

Push the image to your ECS repo (where image-tag-name is the same tag name used above):

docker push image-tag-name
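Putting the tag and push together with the example repo URI from above, the two commands look something like this:

docker tag e9ae3c220b23 aws_account_id.dkr.ecr.region.amazonaws.com/repository-name
docker push aws_account_id.dkr.ecr.region.amazonaws.com/repository-name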

Docker images are run on ECS using a task definition. You can create one with the web UI (https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions), or manually create it as a JSON file. If you create it from the web UI, you can copy the JSON from the configured task as a template for another task.
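As a rough sketch, a minimal task definition JSON for a single web container might look something like this (the family, container name, and image URI here are placeholders):

{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "aws_account_id.dkr.ecr.region.amazonaws.com/repository-name:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}

A JSON file like this can be registered from the CLI with:

aws ecs register-task-definition --cli-input-json file://my-web-app-task.json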

Before you can run a task, you need to create a Cluster using the web UI: https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters

Run your task specifying the EC2 cluster to run on:

aws ecs run-task --task-definition task-def-name --cluster cluster-name

If you omit the --cluster param, you’ll see this error:

Error: "An error occurred (ClusterNotFoundException) when calling the RunTask operation: Cluster not found."

To check cluster status:

aws ecs describe-clusters --clusters cluster-name

Ensure you have an inbound rule on your EC2 security group to allow incoming requests to the exposed port on your container (e.g. TCP 80 for incoming web traffic).
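You can add the rule from the EC2 console, or with the CLI – something like the following, where sg-12345678 is a placeholder for the security group used by your container instances:

aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 80 --cidr 0.0.0.0/0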

Next up: deploying a single container by itself is not particularly useful, so I’m going to take a look at adding Netflix Eureka for discovery of the other services deployed in containers.

Checklist for accessing an AWS EC2 instance with ssh

Quick checklist of items to check for enabling ssh access into a running EC2 instance:

  • EC2 instance is started (check from AWS console)
  • From the AWS console, check that the Security Group for the instance has an inbound rule for SSH – if you’re only accessing it remotely from your current IP, you can press ‘My IP’ to set the rule to your current public IP
  • From Network & Security, create a keypair and download the .pem file
  • Check the public DNS name for your EC2 instance from the console
  • chmod 400 your .pem file, otherwise you’ll get an error that it’s publicly readable

Connect with:

ssh -i path-to-.pem-file ec2-user@ec2-your-instance-name.compute-xyz.amazonaws.com

Docker usage notes – continued (2)

A few rough usage notes, continuing from my first post a while back.

Delete all containers:
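One common way (this removes every container listed by docker ps -aq; add -f to also remove running ones):

docker rm $(docker ps -aq)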

Run a container from an image and delete it when it exits: use --rm:

docker run -it --rm containerid


Pass an argument into a build:

  • use ‘ARG argname’ to declare the argument in your Dockerfile
  • pass a value for argname with --build-arg argname=value

Test during build if an arg was passed:

RUN test -n "$argname"
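Putting those together, a minimal Dockerfile sketch might look like this (argname and the base image are just illustrative):

FROM alpine
# declare the build argument
ARG argname
# fail the build early if it wasn't passed
RUN test -n "$argname"
# e.g. bake it into the image as an env var
ENV ARGNAME=$argname

and build it with:

docker build --build-arg argname=somevalue -t myimage .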


Building and running a Packet Radio Winlink solution in a Docker container, on a Raspberry Pi

Running Packet Radio apps in a Docker container, on a Raspberry Pi? Are you mad, I hear you ask? Isn’t it hard enough to get ax25 and Packet Radio up and running on the Pi anyway? Having done this a few times already, that was my thinking too, and I had the crazy idea that encapsulating most of the config and setup in Dockerfiles to build preconfigured containers might be worth exploring.

Installing and configuring ax25 on the Raspberry Pi, along with Winlink clients that use ax25 such as paclink-unix or Pat, can be done and works well, but the steps – documented for example in this comprehensive guide for building and installing paclink-unix, which spans several pages of instructions – can be daunting even for those familiar with building and installing apps from source on Linux.

Since the steps are well documented, I wondered if they could be captured in a Dockerfile to automate building a self-contained and ready to run Docker container.

tldr; The short story

I did eventually get this working, building ax25 from source and using Pat, but it took me down a rabbit hole for several hours. Skip to the end if you just want to find out how to build and run the completed Docker containers.

The Longer Explanation

I could not get ax25 to work self-contained in its own Docker container, as I ran into issues either accessing the serial device connected to my TNC-Pi from inside the container, or creating an ax0 interface when running kissattach.

If you expose the serial port on the Raspberry Pi to the container running paclink-unix:

docker run -it --device=/dev/ttyAMA0 rpi-paclink

…then trying kissattach in the container gives:

kissattach: Error setting line discipline: TIOCSETD: Operation not permitted

Are you sure you have enabled MKISS support in the kernel

or, if you made it a module, that the module is loaded?

Alternatively, starting with --privileged:

docker run -it --privileged  -v /dev/ttyAMA0:/dev/ttyAMA0  rpi-paclink

gives:

sudo kissattach /dev/ttyAMA0 1

kissattach: SIOCSIFMTU: No such device

I was initially trying to get this working because I wanted to run paclink-unix for Winlink email. When you run this app’s make script it builds wl2kserial and wl2ktelnet, but not wl2kax25. I had run into this before; it seems wl2kax25 doesn’t compile unless you have a later version of the ax25 stack compiled from source.

I changed gears and looked at how you could share an up-and-running ax25 stack from the Docker host, and it turns out this is easy to do: you just pass the --network=host param, and then ax0 appears in the network interfaces in your container.
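For example, using the rpi-paclink image from earlier (the ifconfig call is just run inside the container to confirm the host’s ax0 interface is visible):

docker run -it --network=host rpi-paclink
ifconfig ax0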

The next issue I ran into is that configuring postfix as your email transport takes some effort. bazaudi.com has a very detailed set of instructions, but I couldn’t get it working for outgoing email – incoming via wl2ktelnet and wl2kax25 was working, but only for receiving emails, not sending. Time to try something else.

Installing and configuring Pat in a Container

I tried to get Pat working once before – I think I had it working on either a Debian or Ubuntu box, but couldn’t get it working on Raspbian on a Pi. I decided to try it again in this setup, and reusing the base image with ax25 already compiled from source, it was actually very easy to get Pat up and running.

This is dependent on having ax25 installed and configured on the host Pi OS, and then shared with the container using --network=host. I know, this seems redundant, but this is the only way I managed to get it working.
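For reference, the host-side config I’m assuming here is a single ax25 port defined in /etc/ax25/axports along these lines (the callsign is a placeholder; port ‘1’ matches the kissattach command further down):

# portname  callsign   speed   paclen  window  description
1           N0CALL-1   19200   236     2       TNC-Pi port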

My base image for Raspbian including ax25 built from source is here: https://github.com/kevinhooke/DockerRPiAX25FromSource

To build the image, passing in the parameterized value for your callsign (use your callsign in place of ‘yourcall’):

docker build --build-arg MYCALL=yourcall -t rpi-ax25 .

Next, build an image containing Pat, based on the image we just built – the source for this Dockerfile is here: https://github.com/kevinhooke/DockerRPiPATWinlink.git

Build this image with:

docker build --build-arg MYCALL=yourcall --build-arg MYCALLSSID=yourcall-1 \
    --build-arg MYLOC=AA11aa --build-arg WINLINKPASS=yourwlpass \
    -t rpi-wl-pat .

Now to start it up: remember we’re relying on an ax25 connection from the host, which we’re going to share with the guest container. The TNC-Pi board connected to my Raspberry Pi is available as serial device /dev/ttyAMA0, so I start up my ax0 port like this:

sudo kissattach /dev/ttyAMA0 1 10.1.1.1

Next, run the container as a daemon, share the host networking, and expose port 8080 so we can access the Pat webapp:

docker run -d --network=host -p 8080:8080 rpi-wl-pat

Now let’s fire up the webapp:

Looks good – this is the Pat inbox. Let’s send a test email to myself; this is going to be sent using Packet over 2m VHF via my local Winlink gateway, AG6QO-10. I have this preconfigured in my Pat config file. You can configure this yourself before creating the rpi-wl-pat image:

Remember the Pat webapp that we’re interacting with here is running in a Docker container, on a Raspberry Pi. I just happen to be accessing it remotely from my Mac. For mobile operation or out in the field, you could attach a touchscreen to the Pi and connect a keyboard and mouse too.

To send my email over RF to the Winlink gateway, click Action, then Connect:

In the Pat status window we now see a log of the Packet exchange between my station and AG6QO-10 via BERR37:

A few seconds later the email arrives in my gmail inbox:

If I reply to the email in gmail, it will go back over the Winlink network and be waiting for me when I connect to the Winlink gateway again over RF. Let’s give that a go in Pat – select Action and Connect; we connect to AG6QO-10 again over 2m VHF, and now the reply is in my inbox in Pat:

Success!