For no particular reason, a Sparc workstation is on its way

I was shopping for one of these on ebay:

https://en.wikipedia.org/wiki/SPARCstation_20

By Caroline Ford – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1504020

But then I got caught up on the idea that an Ultra 5 with IDE disk support might be a better choice:

https://en.wikipedia.org/wiki/Ultra_5/10

By Liftarn – https://commons.wikimedia.org/w/index.php?curid=2094130

After a lively discussion in the Facebook Vintage Unix Machines group about the pros and cons of older SPARCstations, the Ultra 1 and 2, vs the Ultra 5/10, I decided to shop for an Ultra 1 or 2. I made an offer on one but didn’t get it. And then I decided to go for an Ultra 60 since it was cheaper than anything else I could find, although in an unknown working condition, other than ‘it powers on’. So when it turns up it will be a learning experience to see whether it’s actually in working condition or not.

From the photos in the eBay listing, I believe there’s a SunPCI card in there, so that will be interesting to play with, and also a Creator 3D graphics card.

On my shopping list of needed parts:

  • a Sun Type 5 keyboard and mouse (with Sun mini DIN connector)
  • a 13W3 video to VGA adapter
  • an SCA SCSI disk
  • possible future purchase, a SCSI2SD adapter

MongoDB on macOS Catalina 10.15: “exception in initAndListen: 29 Data directory /data/db not found”

By default, MongoDB on macOS stores its data files under /data/db. After upgrading to Catalina 10.15, starting MongoDB gives this error:

STORAGE  [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating

According to this question and answer here, Catalina no longer allows apps to read or write to non-standard folders beneath /, so you need to move the data files elsewhere. After my upgrade, the files had been moved to a ‘Relocated Items/Security’ folder. Moving them into my user dir and then starting up with the suggested:

mongod --dbpath ~/data/db

fixes the issue.

Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 6: Redesigning to take advantage of the Cloud

Following on from Part 1 and subsequent posts, I now have the app deployed locally on WildFly 17 and up and running, and also redeployed to a small 1 CPU, 1 GB VPS: http://www.spotviz.info. At this point I’m starting to think about how I’m going to redesign the system to take advantage of the cloud.

Here are my re-design and deployment goals:

  • monthly runtime costs, since this is a hobby project, should be low; less than $5 a month is my goal
  • take advantage of AWS services as much as possible, but only where using those services still meets my monthly cost goal
  • if there are AWS free tier options that make sense, favor those services to help keep costs down

Here’s a refresher on my diagram showing how the project was previously structured and deployed:

As of September 2019, the original app is redeployed, again as a single monolithic .war, to WildFly 17, running on a single VPS. MongoDB is also running on the same VPS. The web app is up at: http://www.spotviz.info

There are many options for how I could redesign and rebuild parts of this to take advantage of the cloud. Here are the various parts that could either be redesigned, and/or split into separate deployments:

  • WSJT-X log file parser and uploader client app (the only part that probably won’t change, other than being updated to support the latest WSJT-X log file format)
  • Front end webapp: AngularJS static website assets
  • JAX-WS endpoint for uploading spots for processing
  • MDB for processing the upload queue
  • HamQTH api webservice client for looking up callsign info
  • MongoDB for storing parsed spots, callsigns, locations
  • Rest API used by AngularJS frontend app for querying spot data

Here are a number of options that I’m going to investigate:

Option 1: redeploy the whole .war unchanged, as previously deployed to OpenShift back in 2015, to a VM somewhere in the cloud. The cheapest option would be a VPS. AWS Lightsail VPS plans are still not as cheap as VPS deals you can get elsewhere (check LowEndBox for deals), and AWS EC2 instances running 24×7 are more expensive (still cheap, but not as cheap as VPS deals).

Update September 2019: COMPLETE: original app is now deployed and up and running

Option 2: Using AWS services: if I split the app into individual parts, I can incrementally take on one or more of these options:

  • Route 53 for DNS (September 2019: COMPLETE!)
  • Serve AngularJS static content from AWS S3 (next easiest change) (December 2019: COMPLETE!)
  • AWS API Gateway for log file upload endpoint and RestAPIs for data lookups
  • AWS Lambdas for handling uploads and RestAPIs (see the sketch after this list)
  • Rely on scaling on demand of Lambdas for handling upload and parsing requests, removing need for the queue
  • Refactor data store from MongoDB to DynamoDB
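
As a rough illustration of the Lambda option, here’s a minimal sketch, under my own assumptions, of what an upload handler could look like in the AWS Lambda Node.js runtime behind an API Gateway proxy integration. This isn’t the project’s actual code: the real handler would call the existing WSJT-X log file parser, and the line-counting stub below is just a stand-in.

exports.handler = async (event) => {
    // With an API Gateway proxy integration, the raw request body
    // (the uploaded log file content) arrives as event.body
    const logText = event.body || "";
    // Stand-in for the real parser: treat each non-empty line as one spot
    const spots = logText.split("\n").filter((line) => line.trim().length > 0);
    return {
        statusCode: 200,
        body: JSON.stringify({ spotsReceived: spots.length })
    };
};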

Option 3: Other variations:

  • Replace use of WildFly queue with AWS SQS (see the sketch after this list)
  • Replace queue approach with a streams processing approach, either AWS Kinesis or AWS MSK
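
For the SQS variation, here’s a sketch, again under my own assumptions rather than anything implemented yet, of enqueueing an uploaded log file as a message using the AWS SDK for JavaScript (v2). The region and queue URL are made-up placeholders.

const AWS = require("aws-sdk");

// The region and queue URL here are placeholders, not a real queue
const sqs = new AWS.SQS({ region: "us-east-1" });

// Each uploaded log file becomes one SQS message; a worker (or a Lambda
// triggered from the queue) would then parse it, replacing the MDB
async function enqueueUpload(logText) {
    return sqs.sendMessage({
        QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/spotviz-uploads",
        MessageBody: logText
    }).promise();
}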

More updates coming later.

Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 5: MongoDB Aggregate Queries

Now that I have my http://www.spotviz.info app up and running with some collected spot data for testing, one of the first things I want to enhance is the date range selection for the visualization, and specifically, adding more data to the heatmap to help you pick a good date range with available data.

The heatmap display I originally added gives you a visual representation of which days have data to view, but it doesn’t help you pick a time range within that day where there’s data. Here’s what the heatmap looks like right now:

The darker the color, the more data there is for that day. The problem right now, though, is this: say for one day you ran WSJT-X for 1 hour between 1pm and 2pm and received spots from 2000 stations. Without knowing that there’s only data between 1pm and 2pm, you could select a range of 9am to 6pm for playback, and you’d get nothing displayed on the map until the animation reached the 1pm to 2pm block, followed by nothing again afterwards.

My first enhancement here is to show the earliest and latest signal received times for a given day, to help you pick a good range.

The original MongoDB query (which I discussed here) retrieves a count of spots per day, bucketing each spot’s timestamp to the start of its day:

db.Spot.aggregate( [
    { $match: { spotter: "kk6dct" } },
    { "$group": {
        // bucket each spot into its day: convert spotReceivedTimestamp to
        // milliseconds since the epoch, then subtract the remainder modulo
        // the number of milliseconds in a day (1000 * 60 * 60 * 24)
        "_id": {
            "$subtract": [
                { "$subtract": [ "$spotReceivedTimestamp", new Date("1970-01-01") ] },
                { "$mod": [
                    { "$subtract": [ "$spotReceivedTimestamp", new Date("1970-01-01") ] },
                    1000 * 60 * 60 * 24
                ] }
            ]
        },
        // number of spots received in each day bucket
        count: { $sum: 1 }
    } }
] )

There’s only one calculated value returned in the results of this query, count, but getting the earliest and latest spots per day is as simple as adding two more calculated values after

count : { $sum : 1 }

so the last part of the query is now:

    count : { $sum : 1 },
    "firstSpot" : { "$min" : "$spotReceivedTimestamp" },
    "lastSpot" : { "$max" : "$spotReceivedTimestamp" },

Done!