Creating an AWS Aurora Serverless database

I’m looking for a low-cost managed database in the cloud for a small project, so I thought I’d take a look at setting up an Aurora Serverless db. Depending on usage (and my usage will be very low), it looks like it’s the cheapest of all the AWS RDS options.

From the RDS section of the Console, I pressed the ‘Create Database’ button:

If you select Aurora, the Serverless option is way down the page here:

I kept all the defaults, but changed the capacity to the smallest option:

After taking note of the generated credentials and pressing the final ‘Create Database’ button, the dialog said it would take a couple of minutes to provision, and it certainly did take a while. I wasn’t timing it, but it was at least 10 minutes before it was ready. This is probably the longest I’ve ever waited to provision anything on AWS.
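For reference, roughly the same setup can be scripted with the AWS CLI. This is only a sketch: the cluster identifier, credentials, engine and capacity values below are placeholders, not necessarily what I picked in the console:

# a rough CLI equivalent: an Aurora Serverless (v1) cluster with the smallest capacity
aws rds create-db-cluster \
  --db-cluster-identifier my-serverless-db \
  --engine aurora \
  --engine-mode serverless \
  --master-username admin \
  --master-user-password 'use-a-real-password' \
  --scaling-configuration MinCapacity=1,MaxCapacity=1,AutoPause=true,SecondsUntilAutoPause=300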

Once it was ready, I tried to use the online query editor, but it looks like there’s an additional step to create a user:

This option is under Network and Security:
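I believe the option in question is the Data API, which the query editor depends on. Assuming that’s the case, the same change can also be made from the CLI (the cluster identifier below is a placeholder):

# enable the Data API (HTTP endpoint) on the serverless cluster and apply it straight away
aws rds modify-db-cluster \
  --db-cluster-identifier my-serverless-db \
  --enable-http-endpoint \
  --apply-immediately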

After applying the change with the immediate option, I created a test table:

Inserted a row:

And then selected all rows:


Looks good so far!
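The same statements can also be run from outside the console via the Data API, assuming the database credentials have been saved as a Secrets Manager secret. A rough sketch, with placeholder ARNs, database name and SQL:

# run a query against the cluster through the Data API (ARNs and names are placeholders)
aws rds-data execute-statement \
  --resource-arn arn:aws:rds:us-east-1:123456789012:cluster:my-serverless-db \
  --secret-arn arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret \
  --database mydb \
  --sql "SELECT * FROM test"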

MacOS Catalina, npm global installs and zsh

Since MacOS switched from bash to zsh as the default shell a while back (you may have noticed the prompt to change to zsh in your Terminal), I keep coming across a few issues that I need to work around. The latest was that apps I’d installed globally with ‘npm install -g’ were no longer on my path.

Following a combination of suggestions from answers to this question, I added the following line to ~/.zshrc to add the npm global install dir to my path:

export PATH="$PATH:$HOME/.npm-global/bin"
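To check the setup afterwards, something like this should do, assuming the npm prefix is already pointing at ~/.npm-global:

# confirm where npm installs global packages
npm config get prefix
# reload the shell config and list what's installed globally
source ~/.zshrc
npm ls -g --depth=0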

Using confluent cli to start/stop a single node Kafka cluster

Install steps for Confluent Platform are here.
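If you go the tarball route, the install boils down to roughly this (the version and paths below are placeholders):

# unpack the Confluent Platform tarball and put its bin dir on the path
tar -xzf confluent-x.y.z.tar.gz -C ~/
export CONFLUENT_HOME=~/confluent-x.y.z
export PATH="$PATH:$CONFLUENT_HOME/bin"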

Using confluent cli:

$ confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.9Uym9FYU
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]
$ confluent local status
$ confluent local stop
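Once everything is up, a quick smoke test is to create a topic and push a message through it. The topic name below is just an example:

# create a test topic on the local broker
kafka-topics --bootstrap-server localhost:9092 --create --topic test --partitions 1 --replication-factor 1
# produce a message (type a line, then ctrl-c to exit)
kafka-console-producer --broker-list localhost:9092 --topic test
# consume it back from the beginning of the topic
kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning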

Kafka producer: ip addresses and host names

Scenario:

  • Kafka client 2.4.0
  • Java 1.8.0_151
  • Kafka cluster is running on a machine with hostname ‘ubuntu-confluent’
  • Producer has bootstrap.servers=10.0.0.x (ip of same host as ubuntu-confluent)
  • At runtime, it appears the hostname is passed back to the client (presumably because that’s what the broker advertises)
  • Subsequent network calls from the client back to the cluster appear to use the hostname instead of the ip, and they fail

Exception on client:

2020-03-08 21:26:06,144 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient [] - [Producer clientId=producer-1] Error connecting to node ubuntu-confluent:9092 (id: 0 rack: null)
java.net.UnknownHostException: ubuntu-confluent: nodename nor servname provided, or not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[?:1.8.0_151]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[?:1.8.0_151]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_151]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_151]
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104) ~[kafka-clients-2.4.0.jar:?]

The easiest fix is to just add an entry to /etc/hosts on the client machine.
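Something along these lines, where 10.0.0.x stands for the broker’s actual ip:

# append to /etc/hosts on the machine running the producer
10.0.0.x   ubuntu-confluent

(The hostname that comes back is whatever the broker advertises, so another option would be to set advertised.listeners to the ip in the broker’s server.properties, but the hosts entry was the quickest fix here.)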