Moving GoDaddy DNS to AWS Route 53

Before changing the DNS settings on GoDaddy, set up Route 53 to manage DNS for your domain, because you’ll need the AWS DNS server names when you update the GoDaddy DNS config.

In the AWS Console, go to Route 53, then Hosted Zones, and press the ‘Create Hosted Zone’ button:

Enter your domain name, select ‘Public Hosted Zone’ and press ‘Create’:

At this point Route 53 will have assigned a list of DNS nameservers for your domain. Make a note of this list for when you update the GoDaddy config.

Next, press ‘Create Record Set’, select record type ‘A’, enter the name of your subdomain (e.g. www), and enter the IP address of the server that this name should resolve to:
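
As an aside, these two Route 53 steps can also be scripted rather than clicked through in the console. Here’s a rough sketch using the AWS SDK for Go – the domain name, record name, TTL and IP address below are placeholders you’d swap for your own values:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	// create the public hosted zone (CallerReference must be unique per request)
	zone, err := svc.CreateHostedZone(&route53.CreateHostedZoneInput{
		Name:            aws.String("example.com"),
		CallerReference: aws.String(fmt.Sprintf("example-%d", time.Now().Unix())),
	})
	if err != nil {
		log.Fatal(err)
	}

	// these are the nameservers to enter in the GoDaddy config
	for _, ns := range zone.DelegationSet.NameServers {
		fmt.Println(*ns)
	}

	// add an A record pointing www at the server's IP
	_, err = svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: zone.HostedZone.Id,
		ChangeBatch: &route53.ChangeBatch{
			Changes: []*route53.Change{{
				Action: aws.String("UPSERT"),
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name: aws.String("www.example.com"),
					Type: aws.String("A"),
					TTL:  aws.Int64(300),
					ResourceRecords: []*route53.ResourceRecord{
						{Value: aws.String("203.0.113.10")},
					},
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}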

Now let’s update the GoDaddy config. By default, here are the DNS entries managed by GoDaddy when you register a domain with them:

First step: switch away from the GoDaddy nameservers. Do this from the ‘Change’ button:

Change the type to Custom:

Then enter the DNS server names from Route 53 and press ‘Save’. You should now be done. Once the nameserver change has propagated, hit your domain name in a browser and hopefully you’re up and running.

Creating a new AWS Lambda using the AWS CLI

It’s pretty easy to set up and configure a new AWS Lambda with the AWS Console, but if you’re iterating on some changes and need to redeploy a few times, the AWS CLI makes the process much quicker.

To create a new Lambda, assuming you have a .zip packaged up and ready to go:

aws lambda create-function --function-name example-lambda --zip-file fileb://example-lambda.zip --handler index.handler --runtime nodejs8.10 --role arn:aws:iam::account-id-here:role/lambda-role
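
If you then want to push an updated .zip to the same function as you iterate, the CLI’s update-function-code command does that job. And if you’d rather drive the redeploy from a Go program instead of the CLI, a rough sketch using the AWS SDK for Go (the function name and zip path below are just placeholders) could look like this:

package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession()))

	// read the freshly rebuilt deployment package
	zipBytes, err := ioutil.ReadFile("example-lambda.zip")
	if err != nil {
		log.Fatal(err)
	}

	// push the new code to the existing function
	_, err = svc.UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
		FunctionName: aws.String("example-lambda"),
		ZipFile:      zipBytes,
	})
	if err != nil {
		log.Fatal(err)
	}
}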

Extracting (scraping) webpage content with Puppeteer

I’m working on a personal project to extract some content from a number of websites. This idea started out as an attempt to use one site’s REST API, but that turned out to be far more complicated, involving far more steps than I was prepared to invest the time in to get working, so I realized I could just scrape the HTML content instead to get what I was looking for.

There are many HTML scraping and parsing libraries. I took a look at x-ray and a few others, but what I discovered is that many sites detect that you’re not browsing in a real browser (presumably by checking your user-agent, cookies, etc.) and then force you into completing CAPTCHAs to prove that you’re not a robot.

I then stumbled across Apify, and more specifically Puppeteer. I think Apify does far more than I need for this little project. Instead, since Apify can also use Puppeteer under the covers, I found that just using Puppeteer directly does all that I need.

Here’s a small script for extracting some text from an example site:

const puppeteer = require('puppeteer');

(async () => {
  // launch a headless browser and open a new page
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.example-website.com');

  // grab the inner HTML of the element we're interested in
  const extractedValue = await page.$eval('#element-id', el => el.innerHTML);
  console.log('extracted value: ' + extractedValue);

  await browser.close();
})();

Learning Golang (part 2)

It’s been several months since I started to look at Golang, but I’ve been picking it back up again. The design decisions in the language are making me like it more and more. It’s clear there were decisions made to address many pet peeves with other languages. Here are a few I’ve noticed:

  • standard formatting and the ‘go fmt’ tool. In many other languages there are endless debates about where your { and } should go, and honestly it really doesn’t matter. Go addresses this by requiring a single style where the opening { is syntactically required at the end of the line (and not allowed on the following line) in order for your code to compile. End of argument. Done.
  • a single for loop for iteration. Other languages support for and while in a few different variations (with the condition at the beginning of the block or at the end), but Go has a ‘for’ statement and that’s it. Keep it simple (see the sketch after this list).
  • much time has been wasted in Java arguing about whether exceptions should be checked or unchecked. Go’s approach: no exceptions and no exception handling. Instead there’s an ‘error’ type which you can choose to return as the last return value from a function if needed (also shown in the sketch below).
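
To make those last two points concrete, here’s a small sketch showing the single ‘for’ statement in both its three-clause and while-style forms, plus a function that returns an error as its last value:

package main

import (
	"errors"
	"fmt"
)

// divide returns an error as its last return value instead of throwing an exception
func divide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	// the single 'for' statement covers the classic three-clause loop...
	for i := 0; i < 3; i++ {
		fmt.Println("counting:", i)
	}

	// ...and the while-style loop, by dropping the init and post clauses
	n := 1
	for n < 100 {
		n *= 2
	}
	fmt.Println("n:", n)

	// errors are plain values, checked explicitly at the call site
	result, err := divide(10, 2)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("result:", result)
}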

Here are my notes, following on from my earlier Part 1.

Imports

Imports can each be declared on a separate line, with the package name in quotes:

import "fmt"
import "unicode/utf8"

Variables

Variable types are defined after the variable name, both in variable declarations and in function parameters:

func ExampleFunc(x int, y int) int {
	var example int
	var example2 string
	...
	return 1
}

This function takes 2 params, both ints, and returns an int.
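
One more thing worth noting before the call examples further down: inside a function you can also use the short form :=, which both declares the variable and infers its type from the value on the right:

example := 1          // example is inferred to be an int
example2 := "a value" // example2 is inferred to be a string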

Unlike other languages where you can pass many params but only return a single result, Go functions allow you to return multiple results, e.g.

func ExampleFunc(x int, y int) (int, int) {
	var a int
	var b int
	...
	return a, b
}

To call this function and receive the multiple results:

x, y := ExampleFunc(1, 2)

If you’re only interested in one of the return values you can use the blank identifier to ignore the other:

_, y := ExampleFunc(1, 2)

Functions with a capital first letter are exported, so they can be used from other packages; functions with a lowercase first letter are not.
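
For example, in a hypothetical ‘shapes’ package:

package shapes

// Area starts with a capital letter, so it is exported and can be
// called from other packages as shapes.Area
func Area(w, h int) int {
	return multiply(w, h)
}

// multiply starts with a lowercase letter, so it is only visible
// inside the shapes package
func multiply(a, b int) int {
	return a * b
}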