Using Avro Serializer with Kafka Consumers and Producers

Some of the Avro Serializer/Deserializer and Schema Registry classes are not available in jars from the usual Maven Central repository. Confluent manages its own repository, which you can add to your pom.xml with:

<repositories>
  <!-- For io.confluent jars not in Maven Central -->
  <repository>
    <id>confluent</id>
    <url>http://packages.confluent.io/maven/</url>
  </repository>
</repositories>

And then you can add the dependency:

<dependency>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-avro-serializer</artifactId>
  <version>5.4.1</version>
</dependency>

This dependency provides the KafkaAvroSerializer, which you can then reference in your properties:

value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer

To generate Avro specific-record classes from an .avsc file, following the Avro developer guide, add the Avro dependency and the generator plugin:

<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro</artifactId>
  <version>1.9.2</version>
</dependency>

and the plugin:

<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.9.2</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>

The plugin configuration looks for .avsc schema files in the src/main/avro folder. An example schema file looks like this:

{
  "namespace": "kh.kafkaexamples.avro",
  "type": "record",
  "name": "TestMessage",
  "fields": [
    {"name": "firstName", "type": "string"},
    {"name": "lastName", "type": "string"}
  ]
}

The plugin will generate the Avro class for any .avsc file it finds in the configured folder.

To use Avro messages with Confluent Platform (or Confluent Cloud), you also need to specify a URL for the Schema Registry, otherwise you’ll see this error:

Caused by: io.confluent.common.config.ConfigException: Missing required configuration "schema.registry.url" which has no default value.
at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:251)

You also need to prefix the URL with http:// or https://, otherwise you’ll see this exception:

Exception in thread "main" org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.MalformedURLException: unknown protocol: localhost

Assuming you’re running Confluent Platform locally, the Schema Registry property is:

schema.registry.url=http://localhost:8081
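Putting the configuration together in code, a minimal set of producer properties might look like the sketch below. The localhost addresses assume Confluent Platform running locally, and the helper class name is illustrative:

```java
import java.util.Properties;

public class AvroProducerConfig {
    // Sketch of producer properties for the Confluent Avro serializer;
    // broker and Schema Registry addresses assume a local Confluent Platform.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("schema.registry.url"));
    }
}
```

These are the same properties shown above, just built programmatically so they can be passed straight to a KafkaProducer constructor.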

To publish a message using the generated TestMessage class from the above schema:

Producer<String, TestMessage> producer = new KafkaProducer<>(props);
TestMessage message = new TestMessage();
message.setFirstName("firstname");
message.setLastName("lastname");
producer.send(new ProducerRecord<>("test-avro-topic", "1", message));
producer.flush();
producer.close();
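On the consuming side, the matching deserializer is io.confluent.kafka.serializers.KafkaAvroDeserializer, and setting specific.avro.reader=true tells it to return the generated specific-record classes rather than GenericRecords. A minimal sketch (group id, topic name, and localhost addresses are assumptions for a local setup):

```java
// Sketch of a consumer for the generated TestMessage class; group id,
// topic name, and localhost addresses are illustrative assumptions.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test-avro-consumer");
props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
    "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("schema.registry.url", "http://localhost:8081");
props.put("specific.avro.reader", "true");

Consumer<String, TestMessage> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test-avro-topic"));
ConsumerRecords<String, TestMessage> records =
    consumer.poll(Duration.ofSeconds(5));
for (ConsumerRecord<String, TestMessage> record : records) {
    System.out.println(record.value().getFirstName());
}
consumer.close();
```

Without specific.avro.reader=true, the deserializer returns GenericRecord values and the cast to TestMessage would fail at runtime.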

Done!

Random useful MacOS shortcut keys

Finder shortcuts:

Cmd-down – navigate into folder (down)

Cmd-up – navigate up from current folder

Spacebar – preview a file

Shift-Cmd-. – toggles display of hidden files

Other tips:

When using multiple monitors, if you pull the mouse cursor down on a screen you can move the Dock to that monitor.

The Hobbit for the ZX Spectrum: a text adventure with interactive NPCs from 1982

After having a Lord of the Rings and The Hobbit movie marathon this weekend, I fired up a ZX Spectrum emulator and relived playing the text adventure game The Hobbit. I shared some screenshots in a thread on Twitter.

The Hobbit was released in 1982 for the ZX Spectrum. For its time, it had some interesting features, like NPCs that wandered around on their own, and language parsing of full statements that let you interact with the NPCs, like ‘say to Elrond “read map”‘.

Given the (for the time) unusual ability to interact with the NPCs, there is even a ZX Spectrum emulator built specifically to play The Hobbit, which also shows the state of the interactive characters and objects as you play. It’s well worth a look for an insight into how the game works – quite an achievement for an 8-bit game in only 48K: http://members.aon.at/~ehesch1/wl/wl.htm

node.js, node-oracledb and Oracle Instant Client

To access an Oracle DB from an AWS Lambda function developed with node.js, you need to package your Lambda with shared libraries from Oracle’s Instant Client. The install instructions are here ( http://oracle.github.io/node-oracledb/INSTALL.html#quickstart ), but the only part really needed is the download location (since there are no specific instructions for bundling the libs with an AWS Lambda): https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html

Not all of the Oracle Instant Client files are needed. Following an older npm module that automated packaging of the required libraries, I used the same list of required libraries:

libclntshcore.so.19.1
libclntsh.so.19.1
libmql1.so
libipc1.so
libnnz19.so
libons.so (not packaged in current Instant Client)
libociicus.so
libaio.so (from separate download - see next step)

libaio – if you’re on a Linux platform you can ‘apt-get install libaio’ or similar, but building my Lambda on a Mac I had to manually download the package (the Arch Linux x64 package) and extract just the .so file from here: https://pkgs.org/download/libaio

Put these in a /lib dir and zip up the folder and files. Use this to create a Lambda Layer.

For the Lambda itself, install the node.js module for the API:

npm install --save oracledb

For examples of API usage, see: https://github.com/oracle/node-oracledb/tree/master/examples
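With the layer in place and the module installed, a minimal connection sketch looks like this. The credentials and connect string are placeholder assumptions; substitute your own database details:

```javascript
const oracledb = require('oracledb');

async function run() {
  // Placeholder credentials and connect string - replace with your own.
  const connection = await oracledb.getConnection({
    user: 'myuser',
    password: 'mypassword',
    connectString: 'myhost:1521/myservice'
  });
  const result = await connection.execute('SELECT 1 FROM DUAL');
  console.log(result.rows);
  await connection.close();
}

run().catch(console.error);
```

In a Lambda handler you would normally create the connection (or a connection pool) outside the handler function so it can be reused across invocations.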