AWS Lightsail default ssh userid

To ssh into AWS EC2 instances, the default user ID is usually ‘ec2-user’ (see my EC2 ssh checklist here).

Lightsail VPS instances appear to use different default user IDs depending on the OS. For example, for an Ubuntu Lightsail instance the default ssh user ID is ‘ubuntu’:

ssh -i path-to-your-ssh-pem-file ubuntu@your-instance-ip

SSL certs upgraded, Docker images upgraded, ready to go!

I had to renew the SSL certs for this site, and while I was doing that I also upgraded a few things and addressed some other issues.

First, apparently when I deployed the SSL certs last time I missed some of the root certs in the chain. The vendor I used gives you each of the root certs individually and you need to manually concatenate them together yourself. More in another post on the steps I took to do this.
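As a rough sketch of that concatenation step (the file names below are placeholders, not the vendor's actual file names), nginx expects the server cert first, followed by the intermediate and root certs in order:

# Server cert first, then the intermediate/root certs, in order
cat yourdomain.crt intermediate.crt root.crt > yourdomain.chained.crt
# Sanity check the chain before deploying it
openssl verify -CAfile root.crt -untrusted intermediate.crt yourdomain.crt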

Since the certs are part of my nginx Docker image, I rebuilt the image, upgrading everything to the latest versions. It was also a couple of years since I last did this, so I had to go back through my posts here to work out the steps I took to deploy last time. I’ll post another update later on the steps I took for this as well.
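As a rough sketch of the rebuild and restart (the my-nginx image name and blog-nginx container name are placeholders for whatever you use), it boils down to:

# Rebuild the image; the Dockerfile copies the new certs and nginx config into it
docker build -t my-nginx:latest .
# Replace the running container with one based on the new image
docker stop blog-nginx && docker rm blog-nginx
docker run -d --name blog-nginx -p 80:80 -p 443:443 my-nginx:latest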

2 years later: 2 years of running WordPress and MySQL on Docker in a VPS

It’s been 2 years since I migrated this site from a native install on a VPS to another VPS running Docker. I covered my migration in a number of posts, the first of which is here:

The surprising thing (maybe? maybe not?) is that the site has been up and running for the past 2 years with no issues. I think I rebooted the VPS a couple of times for reasons I can’t remember, but other than that the site has stayed up reliably.

It’s also been 2 years since I last renewed my SSL certificate, so time to do a couple of updates. More to come later.

Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 6: Redesigning to take advantage of the Cloud

Following on from Part 1 and subsequent posts, I now have the app deployed locally on WildFly 17, up and running, and also redeployed to a small 1 CPU, 1 GB VPS: http://www.spotviz.info. At this point I’m starting to think about how I’m going to redesign the system to take advantage of the cloud.

Here are my re-design and deployment goals:

  • monthly runtime costs should be low since this is a hobby project; less than $5 a month is my goal
  • take advantage of AWS services as much as possible, but only where using those services still meets my monthly cost goal
  • favor AWS free tier options where they make sense and help keep costs down

Here’s a refresher on my diagram showing how the project was previously structured and deployed:

As of September 2019, the original app is now redeployed again as a single monolithic .war to WildFly 17, running on a single VPS. MongoDB is also running on the same VPS. The web app is up at: http://www.spotviz.info

There are many options for how I could redesign and rebuild parts of this to take advantage of the cloud. Here are the various parts that could be redesigned and/or split into separate deployments:

  • WSJT-X log file parser and uploader client app (the only part that probably won’t change, other than being updated to support the latest WSJT-X log file format)
  • Front end webapp: AngularJS static website assets
  • JAX-WS endpoint for uploading spots for processing
  • MDB for processing the upload queue
  • HamQTH api webservice client for looking up callsign info
  • MongoDB for storing parsed spots, callsigns, locations
  • Rest API used by AngularJS frontend app for querying spot data

Here are a number of options that I’m going to investigate:

Option 1: redeploy the whole .war unchanged, as previously deployed to OpenShift back in 2015, to a VM somewhere in the cloud. The cheapest option would be a VPS. AWS Lightsail VPS options are still not as cheap as VPS deals you can get elsewhere (check LowEndBox for deals), and AWS EC2 instances running 24×7 are more expensive (still cheap, but not as cheap as VPS deals)

Update September 2019: COMPLETE: original app is now deployed and up and running
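For reference, a minimal sketch of what that redeploy looks like (the paths and the spotviz.war name are assumptions; WildFly hot-deploys anything dropped into standalone/deployments):

# Copy the rebuilt .war to WildFly's hot-deploy directory on the VPS
scp target/spotviz.war user@your-vps-ip:/opt/wildfly/standalone/deployments/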

Option 2: using AWS services. If I split the app into individual parts, I can incrementally take on one or more of these options:

  • Route 53 for DNS (September 2019: COMPLETE!)
  • Serve AngularJS static content from AWS S3 (next easiest change, sketched below) (December 2019: COMPLETE!)
  • AWS API Gateway for log file upload endpoint and RestAPIs for data lookups
  • AWS Lambdas for handling uploads and RestAPIs
  • Rely on scaling on demand of Lambdas for handling upload and parsing requests, removing need for the queue
  • Refactor data store from MongoDB to DynamoDB
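For the S3 static content step, a minimal sketch using the AWS CLI (the bucket name is a placeholder, and putting DNS/HTTPS in front of the bucket is a separate step):

# Create a bucket and enable static website hosting
aws s3 mb s3://spotviz-frontend
aws s3 website s3://spotviz-frontend --index-document index.html --error-document index.html
# Sync the built AngularJS assets and make them publicly readable
aws s3 sync ./dist s3://spotviz-frontend --acl public-read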

Option 3: Other variations:

  • Replace use of the WildFly queue with AWS SQS (sketched below)
  • Replace queue approach with a streams processing approach, either AWS Kinesis or AWS MSK
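For the SQS variation, a rough sketch of creating the queue and pushing a test message from the AWS CLI (the queue name and message body are placeholders):

# Create the queue and look up its URL
aws sqs create-queue --queue-name spotviz-uploads
aws sqs get-queue-url --queue-name spotviz-uploads
# Send a test message (use the URL returned by the previous command)
aws sqs send-message --queue-url <queue-url> --message-body '{"spots":[]}'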

More updates coming later.