Updating my AWS Lambda Sudoku Solver to generate new puzzles (part 1)

Having spent some time in the past building an implementation of Donald Knuth’s Algorithm X in Java to solve Sudoku Puzzles, I recently wondered what it would take to modify it to generate new puzzles.

If you missed my previous posts on this investigation, see:

It turns out that having a working solver gets you part of the way to a puzzle generator, because you need to be able to check whether a candidate grid has exactly one solution: a valid puzzle has only 1 solution.

When I last wrapped my Solver as an AWS Lambda, I had taken the naive approach of calling System.exit() in my Solver code if I detected there was more than 1 solution, as a quick way to exit rather than get stuck iterating over the potentially thousands of solutions for a grid that’s not a valid puzzle.

I went back and took another look at this and reworked it so I can pass an upper limit on the number of solutions to search for, and changed the return type to return a List of solutions along with a flag indicating whether there is a single solution or not. The latest commits on my repo have these changes, and I’m now ready to move on to building the puzzle generator. More updates coming soon.
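To make the shape of that change a bit more concrete, here’s a minimal sketch of what the result type could look like. The class and method names here are illustrative only, not necessarily the actual names in my repo; the solver’s entry point would then be something like SolverResult solve(int[][] grid, int maxSolutions), stopping the search as soon as the limit is reached:

import java.util.Collections;
import java.util.List;

// Illustrative sketch only: these names are assumptions, not the actual classes in the repo
public class SolverResult {

    private final List<int[][]> solutions;
    private final boolean singleSolution;

    public SolverResult(List<int[][]> solutions) {
        this.solutions = Collections.unmodifiableList(solutions);
        this.singleSolution = solutions.size() == 1;
    }

    // all solutions found, up to the caller-supplied limit
    public List<int[][]> getSolutions() {
        return solutions;
    }

    // true only when the grid has exactly one solution, i.e. it's a valid puzzle
    public boolean isSingleSolution() {
        return singleSolution;
    }
}

The puzzle generator can then call something like solve(candidateGrid, 2) and check isSingleSolution(): asking for at most 2 solutions is enough to prove or disprove uniqueness without enumerating every solution for an invalid grid.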

Creating a new AWS Lambda using the AWS CLI

It’s pretty easy to set up and configure a new AWS Lambda with the AWS Console, but if you’re iterating on some changes and need to redeploy a few times, the AWS CLI makes the process much quicker.

To create a new Lambda, assuming you have a .zip packaged up and ready to go:

aws lambda create-function --function-name example-lambda --zip-file fileb://example-lambda.zip --handler index.handler --runtime nodejs8.10 --role arn:aws:iam::account-id-here:role/lambda-role
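Then, when you’re iterating and need to redeploy new code to the same function, you can upload a new .zip without recreating the function:

aws lambda update-function-code --function-name example-lambda --zip-file fileb://example-lambda.zip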

Extracting (scraping) webpage content with Puppeteer

I’m working on a personal project to extract some content from a number of websites. The idea started out with trying to use one site’s REST API, but that turned out to be far more complicated, involving more steps than I was prepared to invest the time in to get working, so I realized I could just scrape the HTML content instead to get what I was looking for.

There are many HTML scraping and parsing libraries. I took a look at x-ray and a few others, but what I discovered is that many sites detect that you’re not browsing the site in a real browser (presumably by checking your user-agent, cookies, etc.) and then force you to complete CAPTCHAs to prove that you’re not a robot.

I then stumbled across Apify, and more specifically Puppeteer. Apify does far more than I need for this little project, but since it uses Puppeteer under the covers, I found that just using Puppeteer directly does everything I need.

Here’s a small script for extracting some text from an example site:

const puppeteer = require('puppeteer');

(async () => {
  // launch a headless browser instance and open a new page
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // navigate to the page to scrape
  await page.goto('https://www.example-website.com');

  // find the element with id 'element-id' and grab its content
  const extractedValue = await page.$eval('#element-id', el => el.innerHTML);
  console.log('extracted value: ' + extractedValue);

  await browser.close();
})();
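One extra tip related to the bot detection mentioned earlier: since sites often check the user-agent, Puppeteer also lets you override it before navigating, using page.setUserAgent(). A quick sketch (the user-agent string here is just an example, substitute whatever you want to present):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // present a regular desktop Chrome user-agent instead of the default headless one
  await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36');

  await page.goto('https://www.example-website.com');
  console.log(await page.title());

  await browser.close();
})();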

WSJT-X reports time difference > 2 seconds on Windows 10

All of the JT modes (JT65, JT9), and now the newer FT8 mode, are sensitive to the system time being correct to within a couple of seconds. This is because the transmit/receive cycle starts and stops at predetermined times, and the software at each transmitting station and each remote receiving station needs to be in sync in order for each transmission to be successfully decoded.

FT8 seems to be forgiving within about 1.5 seconds of difference, but above this it flags the delta between stations in the DT column:

On a new laptop with Windows 10, even though the date and time were set to update automatically from a remote time server on the internet, almost every FT8 transmission received was consistently more than 1.5 seconds off.

To fix this I downloaded and installed Dimension 4 and set it to start automatically. After syncing the time we’re all set: you can see in the DT column below that the received calls are now all within a couple of tenths of a second after the sync: