Disk already full on a new custom PC built specifically for Microsoft Flight Simulator: how to move your packages folder to a different disk

In less than 10 months, the disk in the new custom PC I built specifically for playing Microsoft Flight Simulator is already full:

I put a 500GB m.2 NVMe drive in this machine, and while 500GB is not massive, it’s surprising that the updates, patches and scenery cache have already filled the entire disk. The main NVMe is a Corsair Gen 4 m.2, slightly pricier than alternatives but faster than the other Gen 3 m.2 sticks available at the time. I have since added a second 1TB m.2 stick, a Sabrent Rocket, which is cheaper but not as fast.

Rather than reinstalling MS Flight Sim from scratch (which would have required another multiday download from Steam), I moved the packages folder from C: to D: – steps to do this are described here.

In summary, for Steam installs, edit the UserCfg.opt file in this location:

C:\Users\YourUsername\AppData\Roaming\Microsoft Flight Simulator

… and change this line:

InstalledPackagesPath "C:\Users\YourUsername\AppData\Roaming\Microsoft Flight Simulator\Packages"

… to point to any new location. You can move the existing content of this folder to the new location, and when you restart, any new updates or add-ons will go to this new location.
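
For example, if you’ve moved the folder to a second drive, the edited line might look something like this (D:\MSFS\Packages is just an example path, use whatever location you moved the folder to):

InstalledPackagesPath "D:\MSFS\Packages"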

Microsoft Flight Simulator 2020 Update 5 : First Impressions

Update 5 for Microsoft Flight Simulator was released this week, on 7/27/21, and my first impressions (after waiting 2 days for my download to complete) are … WOW.

It’s smooth, buttery smooth. In 1080p with a Ryzen 5 3600XT and an RTX 2060 I was previously getting around 30 fps at best, dipping below that in heavy city areas like New York. After the update, with the same settings, I’m now getting around 60 fps, and even without the increase in fps the sim is noticeably smoother, with fewer of the stutters I used to notice before.

There’s plenty of other updates and changes:

  • the world map now shows satellite imagery and place names, which makes it much easier to find interesting places to fly (unlike the plain grey nondescript map we had before)
  • while the game load time seems about the same, the time from creating a flight on the map to arriving on the runway ready to fly has definitely improved (it seems at least twice as fast as it was)
  • particle effects have been added for water, dirt and snow landings. For float planes, you now get a somewhat realistic wake behind your plane when you land or taxi on water. This is a massive improvement over having no effects at all before
  • ski and float plane variants have been added for a number of the stock planes
  • seaplane harbors have been added to the map, so you can start a flight from a harbor or see the locations of harbors on the VFR map
  • more POI markers on the maps and in game as you’re flying

I’m sure there’s plenty more to find, but these are the major changes I’ve seen so far.

Am I happy with the update so far? Yes, definitely. I’ve had one crash to desktop (CTD) after just an hour of flying, which is more frequent than I’d experienced before. Posts in the forums suggest a lot of people are seeing CTDs right now, so hopefully there’ll be more fixes to come. Right now though I’m enjoying being able to take off and land on water and have it look somewhat realistic, a feature we had in most previous MS FS versions but which had been missing until now.

As a comparison, shortly after launch I was comparing taxiing on water near the Verrazzano-Narrows Bridge in FSX:

Verrazzano Narrows Bridge, NY – Microsoft Flight Simulator X

Here’s what it looks like after Update 5 with the wake effects (but still a shame about the rendering of the bridge itself):

At least the water wake effects now look realistic (where previously there weren’t any):

… those solid bridges really need to get fixed though.

On the plus side, one final screenshot – how stunning is this?

AWS CloudFormation example for S3 bucket

Typical CloudFormation for an S3 bucket with block all public access enabled:

Resources:
  S3BucketExample:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: s3-bucket-name
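      # The four settings below are the equivalent of the console's "Block all public access" option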
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

Grading the difficulty of a Sudoku puzzle

I’ve made a couple of previous attempts at developing Sudoku solvers, one naive approach which didn’t work too well, and a second, more informed attempt implementing Donald Knuth’s Algorithm X, which works incredibly well.

I thought I would turn this around and, instead of building a solver, attempt to build a puzzle generator. As I already have a working solver (I’ve also built a React frontend for the Algorithm X solver, which runs as an AWS Lambda), checking whether I have a valid puzzle (one with only a single solution) is easy, as I can reuse my existing solver. The hard part, it turns out, is how you grade the difficulty of a puzzle, i.e. is a puzzle easy, medium or hard? This is largely subjective, based more on personal opinion than on any established measure of difficulty. It’s interesting to note that, unlike solving, there appear to be no established mathematical approaches for grading a Sudoku puzzle.
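
Checking validity, for example, can be as simple as asking the existing solver to enumerate solutions and stopping as soon as a second one turns up. Here’s a minimal sketch, assuming a hypothetical solve_all(puzzle) generator wrapped around the existing Algorithm X solver (the name is just for illustration):

def has_unique_solution(puzzle):
    # A valid puzzle has exactly one solution, so stop searching as soon as a second is found
    count = 0
    for _ in solve_all(puzzle):  # solve_all is a hypothetical generator yielding each solution
        count += 1
        if count > 1:
            return False
    return count == 1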

Ok, so if it’s subjective, how do we write an app to automate the grading? The difficulty of a puzzle is determined using the same techniques a human attempting to solve it would use: the range and number of different techniques needed to solve a puzzle can be used to determine its relative difficulty. This is where the subjective nature comes in.

It seems to be commonly agreed that a puzzle rated as ‘simple’ can be solved solely by identifying ‘naked singles’ and/or ‘hidden singles.’ I’m not going to define all of the solving techniques as they’re described in many places online already, but for reference for the first couple see here.
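
To give a flavour of those first two though, here’s a rough sketch in Python of how naked singles and hidden singles can be detected, assuming the grid is a 9x9 list of lists with 0 for an empty cell:

def candidates(grid, r, c):
    # Values not already used in the cell's row, column or 3x3 box
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return {v for v in range(1, 10) if v not in used}

def naked_singles(grid):
    # Empty cells with only one remaining candidate value
    return [(r, c, min(candidates(grid, r, c)))
            for r in range(9) for c in range(9)
            if grid[r][c] == 0 and len(candidates(grid, r, c)) == 1]

def hidden_singles_in_rows(grid):
    # Values that can only go in one cell of a row (columns and 3x3 boxes work the same way)
    found = []
    for r in range(9):
        for v in range(1, 10):
            spots = [c for c in range(9) if grid[r][c] == 0 and v in candidates(grid, r, c)]
            if len(spots) == 1:
                found.append((r, spots[0], v))
    return found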

Next, if you need to use naked pairs and/or hidden pairs, the puzzle is commonly considered ‘medium’ difficulty.

Similarly, a puzzle is considered ‘hard’ if you need to use naked triples and/or hidden triples.

At this point, more complex techniques such as x-wing and swordfish seem to be used to determine very hard or expert level, although there seems to be some variation on what level of difficulty requires these techniques. There are also many other techniques; some are listed here.
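
Putting those levels together, the grading decision itself ends up being fairly small. Here’s a sketch in Python of the kind of mapping I’m describing, assuming the solving loop records the name of each technique it needed; the technique names and the levels they map to are my own (subjective) choices rather than any established standard:

def grade(techniques_used):
    # techniques_used is the set of technique names the solving loop needed,
    # e.g. {"naked_single", "hidden_single", "naked_pair"}
    expert = {"x_wing", "swordfish"}
    hard = {"naked_triple", "hidden_triple"}
    medium = {"naked_pair", "hidden_pair"}
    easy = {"naked_single", "hidden_single"}

    if techniques_used & expert:
        return "very hard"
    if techniques_used & hard:
        return "hard"
    if techniques_used & medium:
        return "medium"
    if techniques_used <= easy:
        return "easy"
    return "ungraded"

So a puzzle solved with singles alone grades as ‘easy’, while one that also needed naked pairs grades as ‘medium’, which is exactly the logic applied to the examples below.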

Grading Puzzle Examples

Given that there’s no standard approach to grading a puzzle, the criteria used to grade a puzzle from any given source will probably differ from the rules you would apply yourself, so there’s likely to be some variation in how the same puzzle is rated.

Let’s take a couple of example puzzles from sources online and run them through my grader to see if we’re on the right track.

Here’s an ‘easy’ puzzle example from https://www.websudoku.com :

Running my grader against this puzzle, I get:

Puzzle solved: Yes
Initial givens: 35
Passes through grid: 7
Naked singles found: 42
Hidden singles found: 0

This puzzle can be solved only by finding naked singles, so by my grading system it is rated as ‘easy’ difficulty.

This next example needs additional approaches to solve. Two of the techniques it needs, naked singles and hidden singles, would still be considered at the easy level, but as the output below shows it also needs naked pairs:

This is a hard example from SudokuWiki:

Output from solver:

Puzzle solved: Yes
Initial givens: 44
Passes through grid: 35
Naked singles found: 6
Hidden singles found: 30
Naked pairs found: 17

Here’s an example, then, where the source rates the puzzle as difficult, but applying my ranking criteria it would be classified as medium (it requires more than just finding singles, since it needs naked pairs, but no techniques beyond pairs).

Summary

Writing a solver following recognized algorithms like Dancing Links and Algorithm X turned out to be much simpler than developing a human-style grader. For the grader, it took significant time to get the techniques implemented so that they worked correctly and could actually solve a puzzle. As this was a personal project and I was only working on it a couple of hours a week, it took the majority of 2020 to get it complete and working. By comparison, I was able to complete the Algorithm X solver in a couple of hours a week over about two weeks.

Completing this grader was one step towards implementing a puzzle generator. This is now also complete and running a couple of times a day, generating new puzzles which you can load via my React frontend here.