Game development in progress: Space Invaders clone – update 3 (18 months later)

It’s been a year and a half since I gave an update on my Space Invaders clone on Android. Admittedly I haven’t been working on this project continuously in my spare time for those 18 months; in fact, the last time I remember working on it was several months back. What I got stuck on was updating the animation routines so the animation displays at a constant frame rate across different Android devices, and making the sprites look consistent on devices with different screen sizes, resolutions and pixel densities.

If you’re a seasoned game developer this is probably nothing new to you. If you only develop for a platform with identical hardware specs (like a game console) then this is probably something you don’t have to worry (too much?) about. However, with the broad range of specs for Android devices, even testing with the device emulator for different phones it’s pretty obvious that unless your code handles these differences, your game might be fast on one device but slow on another, or the sprite layout may look as intended at one phone’s resolution but be too small on another.
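For the screen size and density side of the problem, a common approach is to pick a baseline design resolution and scale sprites by the ratio of the actual screen to that baseline. Here’s a minimal sketch of the idea (this isn’t my actual sprite code; the SpriteScaler class and the 480px baseline are just illustrative):

import android.content.Context;
import android.graphics.Bitmap;
import android.util.DisplayMetrics;

// Hypothetical helper: scales sprites relative to a baseline design
// resolution so they occupy the same fraction of the screen on any device.
public class SpriteScaler {
    // The screen width the sprites were originally designed for
    private static final float DESIGN_WIDTH_PX = 480f;

    private final float scale;

    public SpriteScaler(Context context) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        // Scale everything by the ratio of actual width to design width
        this.scale = metrics.widthPixels / DESIGN_WIDTH_PX;
    }

    public Bitmap scaleSprite(Bitmap sprite) {
        int w = Math.round(sprite.getWidth() * scale);
        int h = Math.round(sprite.getHeight() * scale);
        return Bitmap.createScaledBitmap(sprite, w, h, true);
    }
}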

I spent a bunch of time reading game dev articles on approaches for maintaining a constant frame rate, and tried to incorporate what I’d learned, but I’ve still got some weird quirks to iron out.
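A common pattern those articles recommend is the fixed-timestep loop: advance the game simulation in fixed increments so game speed stays constant, and let rendering run at whatever rate the device manages. This is a rough sketch of the pattern, not my actual game thread, just the shape of it:

public class GameLoop implements Runnable {
    private static final long STEP_NANOS = 1_000_000_000L / 60; // simulate at 60 updates/sec
    private volatile boolean running = true;

    @Override
    public void run() {
        long previous = System.nanoTime();
        long lag = 0;
        while (running) {
            long now = System.nanoTime();
            lag += now - previous;
            previous = now;

            // Advance the simulation in fixed steps so game speed is
            // independent of how fast this device can render
            while (lag >= STEP_NANOS) {
                updateGameState();   // move invaders and bullets, check collisions
                lag -= STEP_NANOS;
            }
            renderFrame();           // draw at whatever rate the device manages
        }
    }

    public void stop() { running = false; }

    private void updateGameState() { /* game logic here */ }
    private void renderFrame() { /* draw sprites here */ }
}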

Here’s what the game looks like on an emulated Pixel:

I was initially developing and testing on the emulated Nexus devices as my baseline test target, but it runs fine on the Pixel too, as expected (it’s harder to play when pressing the buttons with the mouse!).

Now running on an emulated Pixel 2 XL, it runs fine for a while, until the number of remaining invaders gets down to the point where I speed them up. The speed-up is too fast, and then, for some reason I haven’t found yet, the animation stops before the game ends even though the game is still playing:
Clearly I’ve still got some work to do here, but it’s getting close.

Installing OS/2 4.52 (Warp 4) on VMware ESXi

Installers that boot from floppy disk images require repeatedly swapping the disk images, which is tedious. A much easier approach is to find an ISO set with 1 boot CDROM ISO and 1 install CDROM ISO. The Boot ISO and English ISO from this collection on archive.org work well.

Create a VM with:

  • 1 vCPU
  • 32MB RAM
  • 500MB disk

In ESXi this looks like:

Attach the boot ISO image and boot the VM:

Remove the boot CDROM ISO and switch to the English language CDROM, then press Enter:

Press F5 to switch to Physical View:

Tab to the [free space] in the second section, press Enter for Options, and create a new Primary partition:

Press F5 to change back to Logical View, press Enter for Options:

Since we haven’t created or selected a logical volume yet, this forces you back to the Logical Volume Manager:

Choose the physical partition we created:

Now this looks like:

Set the volume to be installable:

Switch the ISO image back to the boot ISO and restart the VM.

On restarting the VM, ESXi for some reason changes the boot order at this point to HD first, CDROM image second, so you’ll get a blank black screen on startup.

Power off the VM.

Go into your VM options and check the option to force boot to BIOS on next start:
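(If you’d rather not click through the UI, I believe the equivalent tweak is to add the line below to the VM’s .vmx config file; it applies to a single boot and then resets itself:)

bios.forceSetupOnce = "TRUE"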

Power on again.

When the BIOS menu comes up, go to Boot and move the CDROM entry to the top of the list with the + key, then Save and Exit:

When booted, swap the CDROM ISO image to the second install ISO when prompted.

You’ll now see the Welcome install screen again. Press Enter through the next few screens until you get to the partition selection screen. Accept the partition we created earlier and do a ‘Quick format’ when prompted:

Select HPFS:

The install from the ISO image goes pretty quick, then you’ll see this screen:

Followed by a reboot. You’ll now see the first of the config option dialogs:

You can leave the graphics as default, or press the button and switch to the GRADD drivers (which, from memory, are the better drivers to use):

Next/Ok through the next few screens, then you’ll get to the optional installs:

I left the pre-selected options as they were and pressed Next:

Complete the registration screen and you get more options. I unchecked File and Print Sharing and left the other pre-selected options:

Press Next; if there’s anything that needs additional config it will be flagged here. Otherwise, press Install:

The install goes pretty quick from here and will reboot at least once:

“IBM Means 3 Things”:

After another reboot, if you get a blank screen with network card info, press Enter to continue, and you’ll get to your desktop with more options. I selected Java 1.3 to take a look, along with the IBM Web Browser:

At this point the installer doesn’t see my CDROM image even though it’s attached, but pressing Exit takes me to the desktop.

Welcome to OS/2 Warp 4:

The network adapter wasn’t configured with DHCP by default, so from the TCP/IP folder on the desktop, open TCP/IP Configuration (Local), enable the first interface, and turn on DHCP:

You’ll be prompted to reboot again, but now you should have an IP, and if you open Netscape you’ll be able to browse the web, albeit with some rendering issues on sites using features this older version of Netscape doesn’t support.


Local Jira server install: Unable to search: “An unknown error occurred while trying to perform a search”

On starting up my VM where I have Jira installed, none of my logged issues are displaying, and there are errors about searching and indexing:

On the Admin / Advanced / Indexing page it shows:

This page on search and indexing issues, and a number of other pages and articles, talk about deleting the temp Lucene index and cache files, but the docs and other posts miss the important detail of where these files actually are.

This page gives a good overview of the file structure of Jira, but doesn’t talk about the Lucene indexes.

This page talks about deleting the Lucene indexes at $JIRA_HOME/caches/ but doesn’t say where $JIRA_HOME points to. It isn’t the /opt/atlassian/jira directory structure mentioned by the previous article; there isn’t a caches directory there or anywhere below it.
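(For what it’s worth, on a Jira Server install jira.home is typically defined in jira-application.properties under the install directory, so another way to locate it, assuming the /opt/atlassian/jira install path mentioned above, is:)

grep jira.home /opt/atlassian/jira/atlassian-jira/WEB-INF/classes/jira-application.properties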

Not knowing where else to look, I just did a find from the root for ‘caches’ and found the location here:

$ sudo find . -type d -name caches

./var/atlassian/application-data/jira/caches

Ok. I stopped my server with:

sudo /etc/init.d/jira stop

then moved the caches/indexes folder to indexes-old, and restarted Jira with:

sudo /etc/init.d/jira start
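For reference, the full sequence with the actual paths (using the caches location found above; adjust if your jira.home differs) was along these lines:

sudo /etc/init.d/jira stop
sudo mv /var/atlassian/application-data/jira/caches/indexes /var/atlassian/application-data/jira/caches/indexes-old
sudo /etc/init.d/jira start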

And now there’s a new error about Lucene:

Ok. Clicking the Find out More link shows the results of this health check:

Clicking the ‘How do I resolve this’ link takes you to this page, which suggests doing a re-index. That’s done from the Admin / Advanced / Indexing page, which is where I got the second error originally. Going back there, I tried the ‘lock and re-index’ option recommended in some of the other index-related issue posts:

I then got this:

Ok, no errors! Let’s see if my logged issues are back.

They’re back! Now I’m back in business!


Building and deploying Docker containers using GitLab CI Pipelines

As part of migrating this blog to Docker containers to move to a different VPS provider (here, here and here), I found myself repeating a number of steps manually, which is always a good indication that there’s an opportunity to automate some or all of those steps.

Each time I made a change in the configuration or changed the content to be deployed, I found myself rebuilding the Docker image, running it locally, pushing it to my test server, and eventually pushing it to my prod VPS and running it there.

I’m using a locally running GitLab for my version control, so using its build pipeline features was a natural next step. I talked about setting up a GitLab runner previously here – this is what performs the work for your pipeline.

You configure your pipeline with a .gitlab-ci.yml file in the root of your repo. I defined 2 stages, build and deploy:

stages:
 - build
 - deploy

For my build stage, I have a single task which is to build my images using my docker-compose.yml:

build:
 stage: build
 script:
 - docker-compose build
 tags:
 - docker-test

For my deploy steps, I defined one for deploying to my test server, and one for deploying to my production VPS. This first one deploys to my locally running Docker test server. It sets DOCKER_HOST to point at that server, then uses the docker-compose.yml again to bring down the running containers and bring up new containers with the updated images:

deploy-containers:
 stage: deploy
 script:
 - export DOCKER_HOST=tcp://192.x.x.x:2375
 - docker-compose down
 - docker-compose up -d
 tags:
 - docker-test

And one for my deploy to production. Note that this step is defined with ‘when: manual’, which tells GitLab the task should only be run manually (when you click on the ‘>’ run icon):

prod-deploy-containers:
 stage: deploy
 script:
 - pwd && ls -l
 - ./docker-compose-vps-down.sh
 - ./docker-compose-vps-up.sh
 when: manual
 tags:
 - docker-prod

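For reference, here’s the complete .gitlab-ci.yml with the pieces above assembled in one place:

stages:
 - build
 - deploy

build:
 stage: build
 script:
 - docker-compose build
 tags:
 - docker-test

deploy-containers:
 stage: deploy
 script:
 - export DOCKER_HOST=tcp://192.x.x.x:2375
 - docker-compose down
 - docker-compose up -d
 tags:
 - docker-test

prod-deploy-containers:
 stage: deploy
 script:
 - pwd && ls -l
 - ./docker-compose-vps-down.sh
 - ./docker-compose-vps-up.sh
 when: manual
 tags:
 - docker-prod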
Here’s what the complete pipeline looks like in GitLab:

With this in place, any change committed to the repo results in a new image being built and deployed to my test server automatically, and once I’ve finished testing the changes I can optionally deploy them to my prod VPS hosted server.