Installing the Windows Insider beta of Windows 11 for ARM on a MacBook Pro M1 with UTM/QEMU

I downloaded the Windows Insider beta of Windows 11 for ARM and took a look at what’s involved in getting it installed and up and running under UTM/QEMU on a MacBook Pro with an M1 CPU (ARM).

First, since the download is a .vhdx disk image, I used the import option in the UTM frontend to import the downloaded file. I also checked the option for the Spice drivers:

After it starts to boot, the installer requires a network connection and appears to get stuck:

Articles such as this one recommend pressing Shift-F10 to get to a command prompt, and then entering this command to continue with the install, skipping the requirement for a network connection: oobe\bypassnro

At this point the installer reboots, and when it restarts you get an additional option on this dialog allowing you to skip the network requirement during installation:

After a short setup of a couple of minutes: the Windows 11 desktop!

Next, the resolution seems to be fixed at 800×600. Since I’d checked the box in UTM for the Spice/virtio drivers, I noticed there was a CD-ROM ISO mounted on drive D:

Running the installer, it started up and, keeping all the defaults for now, installed without any issue:

All in all, pretty easy: only about 20 minutes of setup, and it seems pretty snappy so far!

Investing in your skills development – how do you choose where to invest? (part 2)

I’ve written before about the importance of keeping your skills up to date. This is a follow-up to a previous post, answering the question: how do you decide where you should spend your time?

You can never keep up with everything: every new programming language, every new tech stack, every new trend. You need to understand and acknowledge that first.

Next, decide what matters to you, and where you want to spend your (limited) time keeping up to date. This is different for everyone, but it could be any combination of:

– keeping up to date with the tech you’re currently working with (all of it, or just parts)

– keeping (or getting) up to date with ‘something else’ that may be next on your horizon

– keeping an eye on upcoming and emerging tech trends. Trends come and go over time; some last longer than others. You can’t jump on every new thing that comes up, so you need to make your own decision on whether something is likely to be part of your future. In other words, is it worth investing your time in this trend or not?

The last point is hard when you’re starting out because you have no past experience to compare against, but spending some time reading the chatter online will give you a rough feel for whether something is increasing in popularity or not.

As with everything, you need to sensibly assess tech trends in order to work out what is pure hype and will never go anywhere, versus what has some substance and is likely to evolve into something you should get more familiar with.

No, AI models will not replace programmers any time soon

This month’s “Communications of the ACM” magazine (01/2023) published a rather alarmist article titled ‘The End of Programming’. While it is a well-written article, it bets heavily on the future usefulness of AI models like ChatGPT to generate working code, replacing the need for programmers to write code by hand. ChatGPT is getting a lot of attention in the media and online right now, with people finding out that not only can you ask it questions on any topic and get a believable answer, you can also ask it a more practical question like “show me C code to read lines of a file”.
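For anyone who hasn’t tried that kind of request, here’s a minimal sketch of the sort of answer you’d expect back for the file-reading question (hand-written here for illustration, not actual ChatGPT output; the filename "example.txt" is just a placeholder):

#include <stdio.h>

int main(void)
{
    /* open the file for reading; "example.txt" is just a placeholder name */
    FILE *fp = fopen("example.txt", "r");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    char line[1024];
    /* fgets reads one line at a time, including the trailing newline */
    while (fgets(line, sizeof line, fp) != NULL) {
        printf("%s", line);
    }

    fclose(fp);
    return 0;
}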

Finding out that ChatGPT can be used to ‘generate’ code is prompting new developers to post questions online like ‘should I start a career in software development when programmers are likely going to be replaced by ChatGPT?’

The tl;dr answer: ChatGPT is not replacing anyone any time soon.

While development and improvement of these types of AI models is going to continue, it’s worth keeping in mind that these models are only as good as the material they are trained on, which also means they’re limited by the correctness and usefulness of that material. This also means they are subject to the age-old problem of ‘garbage in, garbage out’.

What’s not being discussed enough is that these current models do not understand the content they generate. They also have no understanding of whether any of the generated content is correct, either factually correct for text, or syntactically correct for code snippets. Unlike these ML-trained models, as humans we use our existing knowledge and experience to infer missing details from what we read or hear. We’re also good at using our existing knowledge to assess how correct or realistic new information is based on what we already know to be true. AI models currently do not have this level of understanding, although research has been attempting for years to replicate ‘understanding’ and the ability to make decisions based on existing facts (Google ‘expert systems’ for more info).

I’ve seen developers recently attempting to answer questions on Stack Overflow, Reddit and other sites using ChatGPT, with varying success depending on whether the topic of the question was within the scope of the material the model was trained on.

A further problem with text generation from these models is that they lack context. The current models don’t understand context, so they attempt to generate a response by identifying key words in the input prompt, but that doesn’t always produce the answer a human would give to the same question. Models also don’t understand intent. A question can be asked in a number of similar but different ways, and another human may be able to infer the intent or purpose behind it, but for current general-purpose trained ML models that’s not possible.

In its current form, ChatGPT is trained on material currently available online: websites with both static articles and reference material, as well as question-and-answer discussion sites. The limitation of this approach is that if I ask a very specific question like ‘show me example code for building a REST API with Spring Boot’, there are plenty of examples online, and assuming the model was trained on at least some of these, the resulting answer could incorporate some of that material. The answer isn’t likely to be better than anything you could have found yourself if you just Googled the same question. There could be some benefit in having an answer that is a conglomeration of text from various sources, but that can also mean the combined text ends up being syntactic gibberish (the model doesn’t currently know whether what it’s returning to you is syntactically correct).

It’s clear that there is promise in this area for aiding and supporting developers, but as a complete replacement for all custom software development work in its current form, that seems highly unlikely, at least not within the next 10 years, and possibly even longer.