Creating an AWS Aurora Serverless database

I’m looking for a low-cost managed database in the cloud for a small project, so I thought I’d take a look at setting up an Aurora Serverless db; depending on usage (and my usage will be very low), it looks like the cheapest of all the AWS RDS options.

From the RDS section of the AWS Console, I pressed the ‘Create Database’ button:

If you select Aurora, the Serverless option is a fair way down the page:

I kept all the defaults, but changed the capacity to the smallest options:
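
The same setup can be done from the AWS CLI; a rough equivalent would be something like this (the identifier, credentials and capacity values are placeholders, and this assumes the MySQL-compatible serverless engine):

aws rds create-db-cluster \
  --db-cluster-identifier my-serverless-cluster \
  --engine aurora --engine-mode serverless \
  --master-username admin --master-user-password 'change-me' \
  --scaling-configuration MinCapacity=1,MaxCapacity=2,AutoPause=true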

After taking note of the generated credentials and pressing the final ‘Create Database’ button, the dialog said it would take a couple of minutes to provision. It took noticeably longer than that: I wasn’t timing it, but it was at least 10 minutes before it was ready. This is probably the longest I’ve ever waited for anything to provision on AWS.
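
If you’d rather poll from a terminal than keep refreshing the console, the cluster status is available from the CLI (cluster identifier is a placeholder again):

aws rds describe-db-clusters \
  --db-cluster-identifier my-serverless-cluster \
  --query 'DBClusters[0].Status' --output text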

Once it was ready, I tried to use the online query editor, but it looks like there’s an additional step to create a user:

This option is under Network and Security:
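
If the option being enabled here is the Data API endpoint that the query editor relies on, the CLI equivalent should be something like this (cluster identifier is a placeholder):

aws rds modify-db-cluster \
  --db-cluster-identifier my-serverless-cluster \
  --enable-http-endpoint --apply-immediately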

After applying the change with the immediate option, I created a test table:

Inserted a row:

And then selected all rows:
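
The screenshots aren’t reproduced here, but the statements run in the query editor were along these lines (table and column names are made up for illustration):

CREATE TABLE test (id INT PRIMARY KEY, name VARCHAR(50));
INSERT INTO test (id, name) VALUES (1, 'hello');
SELECT * FROM test;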


Looks good so far!

macOS Catalina, npm global installs and zsh

Since macOS switched from bash to zsh a while back (you may have noticed the prompt to change to zsh in your Terminal), I keep coming across a few issues that I need to work around. The latest was that apps I’d installed globally with ‘npm install -g’ were no longer on my path.

Following a combination of suggestions from answers to this question, I added the following line to ~/.zshrc to add the npm global install dir to my path:

export PATH="$PATH:$HOME/.npm-global/bin"
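
That line assumes npm’s global prefix is already set to ~/.npm-global; if it isn’t, set it first, then reload the shell config and check that a globally installed CLI resolves (‘http-server’ is just an example package):

npm config set prefix "$HOME/.npm-global"   # only needed if the prefix isn't set yet
source ~/.zshrc                             # pick up the updated PATH
npm config get prefix                       # should print your home dir plus /.npm-global
which http-server                           # a globally installed CLI should now resolve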

Kafka producer: IP addresses and hostnames

Scenario:

  • Kafka client 2.4.0
  • Java 1.8.0_151
  • Kafka cluster is running on a machine with hostname ‘ubuntu-confluent’
  • Producer has bootstrap.servers=10.0.0.x (the IP of the same host ubuntu-confluent runs on)
  • At runtime, the broker passes its hostname (ubuntu-confluent) back to the client in the metadata response
  • Subsequent network calls from the client back to the cluster then use the hostname instead of the IP, and fail because the client can’t resolve it

Exception on client:

2020-03-08 21:26:06,144 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient [] - [Producer clientId=producer-1] Error connecting to node ubuntu-confluent:9092 (id: 0 rack: null)
java.net.UnknownHostException: ubuntu-confluent: nodename nor servname provided, or not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[?:1.8.0_151]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[?:1.8.0_151]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_151]
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104) ~[kafka-clients-2.4.0.jar:?]
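
You can confirm the advertised hostname is the problem by trying to resolve it from the client machine (the error text above looks like macOS, where dscacheutil does the lookup; on Linux, getent does the same job):

dscacheutil -q host -a name ubuntu-confluent   # macOS
getent hosts ubuntu-confluent                  # Linux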

The easiest fix is to add an entry for the broker’s hostname to /etc/hosts on the client machine.
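
For example, mapping the hostname to the broker’s IP on the client (the IP placeholder matches the one above):

echo "10.0.0.x  ubuntu-confluent" | sudo tee -a /etc/hosts

Since the root cause is what the broker advertises, a cleaner fix if you control the cluster is to set advertised.listeners in the broker’s server.properties to the address clients should actually use, e.g. advertised.listeners=PLAINTEXT://10.0.0.x:9092.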

Retrieving ADS-B transponder data from dump1090

dump1090 is probably the go-to solution for receiving ADS-B transponder signals from planes flying overhead, because:

a) it runs with a cheap rtlsdr USB dongle (more info on dongles here)

b) it runs on a cheap $35 Raspberry Pi

I’ve always wondered how to get data out of dump1090 for use in other apps. It provides a data feed on port 30003 that’s relatively easy to capture with a util like netcat. If you have another app to receive/parse/process the data, it’s as easy as:

nc ip-of-pi 30003 | app-to-parse-data
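
For a quick look at the feed without a dedicated app, awk will do: port 30003 emits the CSV-based SBS/BaseStation format, and going by the commonly documented field layout, airborne position messages (MSG type 3) carry the ICAO hex ident, altitude, latitude and longitude in fields 5, 12, 15 and 16:

# print hex ident, altitude, lat and lon from airborne position messages
nc ip-of-pi 30003 | awk -F, '$1 == "MSG" && $2 == 3 { print $5, $12, $15, $16 }'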

I have a project ‘in-flight’ right now using this approach… more updates later.