Replacing a MacBook Pro optical drive with an SSD: stripped screws aplenty

Older model MacBook Pros typically came with a rotational hard disk and an optical drive. Some models had a 6Gbps SATA controller on the HDD bay but only a 3Gbps controller on the optical drive bay. It's worth checking in the System Information tool whether the controller for the optical bay is slower than the one for the HDD bay. If it is, you might want to swap the HDD into the optical bay and put the SSD in the main HDD bay. If both bays are 6Gbps, it's fine to put an SSD in the optical bay without limiting its throughput.
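You can also check the link speeds from the Terminal rather than clicking through System Information; something like this should work (it prints the 'Link Speed' lines from the SATA report):

```shell
# List each SATA controller's supported link speed (6 Gigabit = 6Gbps)
system_profiler SPSerialATADataType | grep -i "link speed"
```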

My mid-2012 MBP has 6Gbps on both bays.
I used an OWC drive doubler bracket to put my SSD into the optical bay. Here's the patient open and ready to receive its new drive: existing HDD at the top right, optical drive bay at the bottom right. The bag of tools comes with the OWC bracket:

The OWC bracket is pricier at $29 on Amazon compared to the cheaper alternatives at under $10, but for the difference in price you get everything you need in the box, including tools, replacement screws, and a manual. The manual is incredibly detailed, with step-by-step photos for each MBP model the bracket fits. Find your model, follow the steps, done.

The replacement should take you less than an hour, but I ran into one of the soft black screws that wouldn't budge, and it stripped almost instantly. I tried the elastic band trick, I tried supergluing a screwdriver to the screw... no good.

Drilling out a stripped screw is probably the last resort, unless you can reach it with a Dremel and cut a slot into the top. This one was recessed, so I did some reading around and a 'Grabit' seemed to be the way to go.

The screw in question for me was the larger one in step #8 in iFixit’s instructions here. The instructions even say:

Take care, as these screws are unusually easy to strip

Yep. I think that should actually say:

These screws are guaranteed to strip. Make sure you have tools at hand to remove them when stripped.

The Grabit Micro #1 and #2 did the job for me; the #1 seemed the one to use. Using the drill end, it took a while to drill a hole into the top of the stripped screw. Flipping the bit around to the extraction end, it didn't catch like it was supposed to. At that point I thought my only option was to drill the screw out, so I swapped in the next size up and started slowly drilling, but the drill end actually caught inside the hole. Since the drill and extractor ends both turn anticlockwise, it immediately started to remove the screw. Phew!

So how's the SSD? It's awesome. Whereas before, El Capitan took more than a minute (I hadn't timed it, but roughly) to cold boot on my i7 2012 MacBook Pro, from a clean install on this SanDisk SSD it boots to the logon screen in around 6 to 7 seconds. Pretty damn incredible. It boots from cold in the same time it used to take to come out of sleep from the HDD, and using OS X is incredibly fast and fluid. My 2012 MBP has a couple more years of life to go 🙂

Creating an OS X El Capitan install flash drive

Format the USB Flash Drive using Disk Utility:
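If you prefer the Terminal over the Disk Utility GUI, something like this should do the same job. The disk identifier and volume name here are examples only, so check 'diskutil list' first and make absolutely sure you pick the flash drive, since this erases it:

```shell
# Find the flash drive's disk identifier first
diskutil list

# Erase it as a Mac OS Extended (Journaled) volume named "ElCapInstaller"
# WARNING: double-check the identifier - this wipes the whole disk
diskutil eraseDisk JHFS+ ElCapInstaller /dev/disk2
```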

The volume name in the next step is /Volumes/name-you-gave-the-volume-in-the-first-step.

Copy install files to the Flash Drive using createinstallmedia:
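The command looks something like this, assuming the installer app is in /Applications and the flash drive volume is named ElCapInstaller (substitute the volume name you used in the previous step):

```shell
# Writes the El Capitan installer onto the flash drive (erases the volume)
sudo "/Applications/Install OS X El Capitan.app/Contents/Resources/createinstallmedia" \
  --volume /Volumes/ElCapInstaller \
  --applicationpath "/Applications/Install OS X El Capitan.app" \
  --nointeraction
```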

To boot from the flash drive, reboot your Mac holding down the Option key and then choose the icon for the flash drive.

Installing OpenJDK 8 on Linux Mint 17.3

Linux Mint has OpenJDK 7 available in the default repos, but not 8 for some reason. You have a couple of options:

To install OpenJDK 8 from a third-party PPA (instructions from here):

sudo apt-add-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk


Or you can install Oracle Java 8, downloading the .tar.gz file from here:

gunzip and tar xvf the archive, move the jdk1.8.0_xxx dir somewhere like /opt/java, and then (from instructions here):
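Concretely, the unpack-and-move steps might look like this (the archive filename and version here are examples; use whatever you downloaded):

```shell
# Unpack the Oracle JDK archive (tar's z flag covers the gunzip step)
tar xzf jdk-8u102-linux-x64.tar.gz

# Move the extracted dir somewhere sensible
sudo mkdir -p /opt/java
sudo mv jdk1.8.0_102 /opt/java/
```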

sudo update-alternatives --install "/usr/bin/java" "java" "/opt/java/jdk1.8.0_102/bin/java" 1
sudo update-alternatives --set java /opt/java/jdk1.8.0_102/bin/java

Windows 10 Anniversary Update – not as smooth as it could have been

There have been a lot of stories in the news in the past few days, since the Windows 10 Anniversary Update went live, of people running into issues with unexpected hangs and blue screen crashes after the update (for example). Even the update itself does not seem to be as smooth as it could have been.

On my HP desktop, I went through the first few reboots during the upgrade and then got stuck at a gray screen with the rotating progress balls. There was some disk activity every couple of seconds, so I left it running, checking back every hour or so, but it stayed at this point for a whole day. Thinking it had hung, I rebooted and got a 'Recovering your installation' message, and then it seemed to pick up again at the 'Working on Updates' progress screen. After about another 15 minutes it's sitting at 91% complete with a ton of disk activity, so I'll leave it running and check back on it later today.

If you’ve run into other issues during the upgrade, there’s a list of commonly seen issues so far in this article that might be useful.

Why hasn’t the evolution of software development tools kept pace with other changes in the development of software systems?

Many changes in IT occur as an indirect result of the development or introduction of some other technological change. Not all of these involve the invention of a completely new technology; some are related to the increased availability or reduced cost of a resource that was already available. Reduced costs allow, for example, building a system from commodity, off-the-shelf x86/AMD64 servers rather than buying more expensive branded systems from a big-name supplier (e.g. IBM or Oracle).

Some changes in the development of IT systems appear to come and go more like fashions in clothing: what is today's hot new trend is tomorrow's old news, but some trends seem to come back into fashion at some point in the future. IT trends are less like a linear timeline of incremental improvements and more like a churning cycle of revolving ideas that gain popularity and then fall out of favor, as we strive to find what works versus what doesn't, what's more efficient, or what's more effective.

As an example, computer systems in the 1960s and 70s were mainly centralized: computing resources were provided by hardware in a central physical location, usually a mainframe, and accessed remotely by users via terminals. The terminal device had little or no processing power itself; all computing resources were provided by the centralized system.

After the introduction of the IBM PC and its clones, computing resources became available on the user's desk rather than locked up in the computer room. This allowed development of systems where part of the processing could be provided by an application running on the user's desktop, and part by backend resources running remotely. This application style is called client/server.

Every architectural decision has pros and cons. While client/server systems reduced the need for centralized processing resources, since some processing is off-loaded to the user's desktop, distributing and maintaining an application installed on the user's desktop brings other challenges (how do you install, update, and patch an application deployed to hundreds or thousands of end user workstations?).

Development of web-based applications addressed some of the application distribution limitations of client/server systems. Initially, though, the user experience possible with early versions of HTML and the lack of interactive capabilities in early browsers felt like several steps backwards: web applications resembled the dumb-terminal applications of years prior, compared to what was possible with a thick client leveraging the native UI controls and features of the OS it ran on. As JavaScript evolved, richer features were added in HTML5, and JavaScript frameworks emerged to help developers build richer, more interactive browser-based experiences without writing code to manipulate the browser's DOM by hand. Fast forward to today, and it's arguable that we've reached the point where a web-based application can equal the user experience that was previously only possible in a native application running directly on an OS platform.

Where am I going with this?

This was an overly simplified and not entirely historically complete summary of the evolution of IT systems development over the past 50 years or so. The point I want to make is, regardless of whether we develop software for mainframes, desktop applications, client/server systems, web-based applications, mobile apps, or cloud-based apps, the approach we use to develop software today, the process of writing code, has not changed much in over 50 years:

We type code by hand using a keyboard. Typing every letter of the source code, letter by l.e.t.t.e.r.

The only arguably significant changes are that we no longer develop software by plugging wires into different terminals on a plugboard, or by punching cards and stacking them in a card reader to load a program into memory. These minor differences aside, we've been busy typing source code into computers using a keyboard for the past 40 years or so. Even our current IDEs (our Visual Studios, Eclipses, and Netbeanses) are not that different from the editors we used to write our Borland Turbo C code in the late 80s and early 90s. Just as our deployment approaches cycle round and browsers have become our new (internet) terminals, for some reason developers in the front-end world are obsessed with text editors like Sublime Text, Atom, Brackets, and the newcomer from Microsoft, Visual Studio Code, or, for the real programmers, Emacs and vi/vim, shunning the more feature-packed IDEs. I realize this is an exaggeration to make a point about how we still code with text editors; in reality, today's text editors with syntax highlighting and code completion are arguably far closer to IDEs than to plain text editors at this point, but hey, we've always boasted that real developers only code in vi or Emacs, right?

More Productive Developer Tools?

At various points in the past there have been developer tools that, you could argue, were far more productive than the IDEs like Eclipse and NetBeans that we use today for Java development. And yet, for many reasons, we chose to continue to type code, letter by letter, by hand. Sybase's PowerBuilder, popular in the mid 1990s, was an incredibly productive development platform for building client/server applications (I did a year of PowerBuilder development in 1997). Why was it more productive? To oversimplify: to build a database-backed application with CRUD (create/retrieve/update/delete) functionality, you pointed the development tool at your database schema, visually selected the columns from tables that you wanted to display on screen, and it generated a GUI for you using a UI component called a DataWindow, letting you drag and drop to customize the display as needed. Sure, you coded additional business logic in PowerScript by hand, but the parts we spend so much of our time building by hand with today's tech stacks and tools were done for you by the development tool.

Other variations on this type of visual programming have appeared over the years, like IBM's VisualAge family of development tools, available for many platforms and programming languages, which provided a visual programming facility where you graphically dragged links between components representing methods to be executed on some condition or event.

Interestingly, many of the features of VisualAge Micro Edition became what is now known as Eclipse. I find that particularly interesting as a Java developer who has used Eclipse for many years, and who in my development past used VisualAge Generator and VisualAge for Java at different companies. I even still have a VisualAge for Java install CD (not sure why, but it's still on my shelf):

More recently we've had interest in Model Driven Development (MDD) approaches, probably the most promising move towards code generation. For those who remember Rational Rose in the mid 1990s and its ability to 'roundtrip engineer' from model to code and from code back to model, it does seem like we've been here before. When the topic of code generation comes up, I remember one of my college lecturers, during a module on 'Computer Aided Software Engineering' (CASE), stating that in the future we would no longer write any code by hand; all code would be generated using CASE tools from models. This was in 1992.

24 years later, we’re still writing code by hand. Using text editors.

I don't have an answer to this post's initial question of why our development tools haven't advanced, but at least we've moved on from punch cards. But who knows, maybe someone is about to release a card reader for developing JavaScript. If anything, the reason could be that developers love to code. Take away the code and you remove the need for developers. Maybe that's not a bad thing. Maybe to evolve our industry we have to stop encouraging coding by hand and start consciously focusing on other techniques like MDD. As long as you have talented and passionate developers who love to code, you will have code; I don't think code is going away any time soon. In the near future we may spend more of our development effort building systems that can evolve and learn by themselves (machine learning), but someone still has to program those systems. Coding by hand is likely to be around for a while.

Now stop reading and go write some code 🙂

Have you been asked a ‘FizzBuzz’ question in a technical interview?

This question popped up in my daily email digest from Quora:

Are there really programmers with computer science degrees who cannot pass the FizzBuzz test?

Ok. I've been a professional software developer for 22 years (not including hobbyist coding before graduating and starting work) and I have a Computer Science degree. What is this FizzBuzz test? Now I'm curious. I admit it's been a while since I've interviewed for a job, but I'm aware of the current trend of asking interview questions to gauge a candidate's ability to solve problems. 'FizzBuzz' refers to a school game where, as a group, you take turns counting, but replace the number with 'Fizz' if it's divisible by 3, 'Buzz' if it's divisible by 5, or 'FizzBuzz' if it's divisible by both (or something similar, along those lines).

So the interview question is: write code that counts from 1 to 100, and outputs the number, or Fizz, Buzz, or FizzBuzz according to these rules.

The point of the question is not to write the most elegant or brilliantly clever solution to the problem; the point is to establish whether the candidate can write code demonstrating elementary software development concepts such as iteration and boolean conditions. If you've ever conducted technical interviews, you won't be surprised that you come across candidates who say they have x years of experience coding in language XYZ, but it's pretty obvious that they can't code anything at all. Jeff Atwood commented on this in a post here.

Taking a first attempt at writing this straight up, with little thinking, I wrote this:
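A straightforward version along those lines (a reconstructed sketch in shell, not my exact original) looks like this; note the divisible-by-5 test appears twice:

```shell
#!/usr/bin/env bash
# Straightforward FizzBuzz: the i % 5 check appears twice
fizzbuzz() {
  local i
  for (( i = 1; i <= 100; i++ )); do
    if (( i % 3 == 0 && i % 5 == 0 )); then
      echo "FizzBuzz"
    elif (( i % 3 == 0 )); then
      echo "Fizz"
    elif (( i % 5 == 0 )); then
      echo "Buzz"
    else
      echo "$i"
    fi
  done
}

fizzbuzz
```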

Once I'd written this, it concerned me that I had the mod 5 test twice, so I tried to optimize the logic to not repeat it, and my second attempt resulted in code that didn't write out the expected results. I think this is the second part of the point of this question: it's a puzzle where it appears there should be an elegant solution, but there really isn't. Gayle McDowell's answer to this question on Quora calls this the 'Smart Person's Mirage'; it seems like there should be a smart answer, but there isn't.

Other than being lured in to answer this question myself, since I'd never heard of it before, I guess the lesson here is that it's not really the answer that counts, it's how you approach the problem and the process you take to solve it.

mdm crashing after upgrade from Mint 17.3 to Mint 18 (and solution)

I recently upgraded from Mint 17.3 to Mint 18 using mintupgrade, following the instructions here, and unluckily had a power cut in the middle of the upgrade. When I rebooted, some things had changed: the logon screen had new background images and the grub menu now said 'Mint 18', but as soon as I logged on, mdm crashed, with a dialog saying that X Windows had crashed within seconds of starting. The popup dialog said to check ~/.xsession-errors, which contained this error:

initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
syndaemon: no process found

A quick Google found this question with the same error message, and following the suggestion to run 'sudo apt-get install cinnamon' fixed my issue. I restarted mdm with 'sudo service mdm restart', logged on, and everything was good.
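Collected together, the recovery steps were just:

```shell
# Reinstall the desktop packages the interrupted upgrade left broken
sudo apt-get install cinnamon

# Restart the display manager and log on again
sudo service mdm restart
```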

I'm not entirely sure how much of the upgrade had completed; re-running 'mintupgrade upgrade' still prompted for a number of packages to be deleted or upgraded. I completed the upgrade, rebooted, and now everything looks good.

This could have been a lot worse, but luckily I was able to recover with no noticeable issues so far. And Mint 18 looks great (I like the new window animations!).

Docker Swarm node set up with Docker 1.12 RC on Raspberry Pi

I've been following the steps in this article to set up a Docker swarm cluster on a pair of Raspberry Pis. Most of the steps work as-is from Mac OS, but a few need a couple of variations.

This is most likely going to be part 1 of a few posts as I work through and get this working.

For example, to copy your ssh key to the Pis, instead of:

ssh-copy-id pirate@pi3.local

… you can do this instead (from the tip here), substituting your own public key file:

cat ~/.ssh/id_rsa.pub | ssh user@machine "mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys"

To switch between remote hosts:

eval $(docker-machine env pi1)

where pi1 is the name of the remote host.

To switch back to the localhost (not entirely obvious but found the answer here):

eval "$(docker-machine env -u)"

After 'docker swarm init' on the first node and joining the other nodes to the cluster, 'docker node ls' on the manager lists the nodes in the cluster:
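The init/join sequence, sketched below; the IP address and token are placeholders, since 'docker swarm init' prints the exact join command to paste on each worker:

```shell
# On the first Pi (the manager), advertise its own address:
docker swarm init --advertise-addr 192.168.1.10

# On each of the other Pis, paste the join command that init printed:
docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377

# Back on the manager, list the nodes in the cluster:
docker node ls
```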

More to come in part 2 :-)

Installing kernel headers for Oracle Linux 6 on VirtualBox

The usual reason for Guest Additions failing to install on a Linux guest in VirtualBox is that the kernel headers are missing. How you install these, and where they come from, varies from distro to distro, although they're usually available via the distro's package manager.

I had an Oracle Linux 6 guest with Guest Additions (for video drivers, shared folders, and clipboard sharing) all working, then at some point I started it up again and it was no longer working, and wouldn't re-install either. It seems I'd picked up a kernel update, so I needed to update the kernel headers too.

This post covers the steps needed. On OL6, before installing the Guest Additions, just run 'yum install kernel-uek-devel' and you should be good to go (assuming you're booting the 'unbreakable kernel' and not the RHEL-compatible kernel).
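In full, assuming the UEK kernel:

```shell
# Headers for the Unbreakable Enterprise Kernel (UEK)
sudo yum install kernel-uek-devel

# If you're booting the RHEL-compatible kernel instead, you'd want:
# sudo yum install kernel-devel
```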