Industries that build their own tools

Here’s an interesting observation, and maybe something you haven’t thought of before. If you work in IT, particularly in software development, the tools that you use as a developer were built by other developers.

Think about that for a minute. Can you imagine if chefs built their own ovens, or doctors made their own medicines?

Is there any other industry that builds its own tools?

The value of simplicity in design

As a software developer I can tell you for sure that I am not a User Experience (UX) expert, but I can give you an opinion, based on experience, on what looks good or what is easier to use versus something that is not.

What ‘looks good’ is obviously a highly subjective point of view, but what ‘works well’ or is easy to use is, I assume (again, speaking as a non-expert in UX), easier to measure with a variety of metrics: time spent on a page, time spent looking for x on a page, time between related clicks/keypresses, the number of navigation steps required to get from x to y, and so on.

The reason for this post is that I’ve come across this quote a few times in multiple places over the past few days:

“… perfection is attained not when there is nothing more to add, but when there is nothing more to remove”

(from the French author Antoine de Saint-Exupéry)

In software development at the lowest level, there are a number of guiding principles related to simplicity, for example ‘do one thing and do it well’, with the intent that a method or a class that does one thing is far easier for a developer to understand and maintain. The current trend for microservices carries this concept through to how you structure, package and deploy your system: not as a monolithic single deployment unit, but as many smaller services that each ‘do one thing and do it well’.
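
As a purely illustrative sketch (mine, not from any particular codebase), here’s what ‘do one thing and do it well’ looks like at the smallest scale – each function below has a single, narrow job, which makes each one easy to understand, test and change in isolation:

    #include <stdio.h>

    /* Does one thing: count the words in a line of text. */
    static int count_words(const char *line) {
        int count = 0;
        int in_word = 0;
        for (; *line != '\0'; line++) {
            if (*line == ' ' || *line == '\t' || *line == '\n') {
                in_word = 0;
            } else if (!in_word) {
                in_word = 1;
                count++;
            }
        }
        return count;
    }

    /* Does one thing: report a word count to the user. */
    static void print_word_count(const char *label, int count) {
        printf("%s: %d words\n", label, count);
    }

    int main(void) {
        const char *line = "do one thing and do it well";
        print_word_count("example", count_words(line));
        return 0;
    }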

Bearing all the above in mind, I’ve always found it interesting how Apple clearly follows these design principles in OS X, simplifying the user experience from release to release. Even their hardware products clearly ‘do one thing and do it well’. The iPod was a perfect example of this – it was a music player and nothing else.

Microsoft, on the other hand, tend to take a different approach and seem to be on a never-ending quest to continually add more features, with each release becoming more feature rich, more complex and more difficult to use. Think of any of the MS Office apps and the dizzying array of features each app has, and the amount of time you spend looking for the one feature that you know must be in there somewhere but is never in the ribbon bar where you think it should be. Presumably at some point Microsoft does usability testing with typical users – at what point, though, do they decide that the UX design is good or acceptable? Maybe I’m not a typical user, but if asked to give an example of a well-designed and easy-to-use application, a Microsoft application would definitely not be the first to come to mind. Apple don’t escape unscathed here either – some of the recent design changes in iTunes over the past few releases have boggled my mind (related options in different places, some on the far left of the app and some on the right).

I don’t think I had any particular point to make in this post, so I’ll leave you with a link to one of my favorite UX books (again, not as an expert in this area). If you’re designing the UI for an application, please, Don’t Make Me Think.

Uncle Bob: “Make the Magic Go Away” – why you should learn some Assembly

I’ve been spending some spare time learning some ARM Assembly (and sharing some of my experiences here, here and here).

In the early 90s at college I did a module on 68000 Assembly on the Atari ST, but I haven’t done any since. I remember being amazed at how complicated it was to implement even the simplest of code, since you’re dealing with a very limited set of instructions – the instructions the CPU itself understands. At the same time, though, you gained an insight into what goes on under the covers and how the computer itself works – how the CPU’s registers are used, and how data is transferred from registers to memory and vice versa. It’s computing at its most elemental level; you’re working with the bare metal hardware.

Since I’ve also recently been playing around with random stuff on the Raspberry Pi, I thought I’d take a look at the ARM CPU and learn some ARM Assembly. I felt a need to get back to basics and learn about the architecture of ARM CPUs and what makes them tick. As much as this sounds pretty hardcore and crazy, ARM CPUs are showing up pretty much everywhere and you probably don’t even know it. There’s a good chance that at least one, if not more, of the mobile devices you currently own or have owned over the past few years is powered by an ARM CPU. So given the memory and CPU constraints of small form factor devices, and also IoT type devices, it’s not completely off the wall to be interested in learning some ARM Assembly.

Anyway, back to my original point. If you want to understand what makes a computer tick (literally), you can’t go far wrong by learning some Assembly. You’ll get a far better understanding of what goes on under the covers, and a new appreciation of just how much abstraction there is in today’s high level languages (Java, C#, Objective-C etc) – how much they do for you without you ever really having to know what’s going on underneath. But if you really want to get a deeper understanding, you lift the hood/bonnet and start poking around in the engine, right?
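
To make that concrete, here’s a minimal sketch of my own (not from Bob’s post or the articles linked above): one trivial line of C, with roughly the ARM assembly an optimising compiler such as gcc might emit for it shown in the comments. The exact instructions vary by compiler, options and ABI, but the point stands – even a single high-level statement ultimately turns into register and memory traffic.

    /* One line of C, and (approximately) what it becomes on ARM. */
    int add(int a, int b) {
        /* Under the standard ARM calling convention (AAPCS) the two
           arguments arrive in registers r0 and r1 and the result is
           returned in r0, so an optimising compiler can reduce the
           whole function to something like:

               add r0, r0, r1   @ r0 = a + b
               bx  lr           @ return to the caller
        */
        return a + b;
    }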

It surprised me when I came across this post by Uncle Bob recently:

http://blog.8thlight.com/uncle-bob/2015/08/06/let-the-magic-die.html

Bob comments on the continual search within the industry to find the perfect language or library. We’re continually re-inventing languages and frameworks, but nothing revolutionary is being ‘invented’ – they’re all solving the same problems, and not really offering anything new. Bob even goes as far as to say there really hasn’t been anything new in computer languages for 30 years.

The unusual thing is that we get caught up in the promise that maybe the next big language or framework will ‘solve all problems’ and do it better than everything that came before, but there’s still really nothing new.

Bob’s point:

But there is no magic. There are just ones and zeros being manipulated at extraordinary speeds by an absurdly simple machine. And that machine needs discrete and detailed instructions; that we are obliged to write for it.

And he continues:

I think people should learn an assembly language as early as possible. I don’t expect them to use that assembler for very long because working in assembly language is slow and painful (and joyous!). My goal in advocating that everyone learn such a language is to make sure that the magic is destroyed.

And here it is, the reason why you should learn Assembler:

If you’ve never worked in machine language, it’s almost impossible for you to really understand what’s going on. If you program in Java, or C#, or C++, or even C, there is magic. But after you have written some machine language, the magic goes away. You realize that you could write a C compiler in machine language. You realize that you could write a JVM, a C++ compiler, a Ruby interpreter. It would take a bit of time and effort. But you could do it. The magic is gone.

I don’t know exactly what prompted me recently to start learning Assembler, but these comments from Uncle Bob resonated with me. If you don’t know how a computer works, how do you expect to understand what is going on when you develop code to run on it?

So there you go. Bob said it. Go learn Assembler. Maybe you’ll learn something.

 

The Unavoidable Compromise of Business Driven Development

Given enough money, time, technical experience and creative input, history has shown that as an industry we can build awesome things. Unless you’re working on a self-funded project with unlimited supplies of cash and time, though, it’s unlikely that most of us will ever have the experience of working with minimal or no resource constraints.

Software development in ‘the real world’ is really no different from any other business, and the concept of the Triple Constraint has been well understood in Project Management for some time. This describes the inter-relationship between three attributes:

  • schedule
  • scope
  • cost

and how they interact to affect the quality of the final product. At a high level, it’s generally understood that you can have ‘any two’ of these, but it’s impossible to have all three at the same time. Each of these attributes translates to a desirable quality:

  • fast (deliver the product in less time)
  • good (include all desired features)
  • cheap (deliver at low cost)

So, you can have fast and good but it won’t be cheap, or you can have fast and cheap but it won’t meet all your requirements (some features will have to be left out).

So back to the original topic. How is software development a compromise? Invariably because your client or your company wants all these things: “we want it developed in an impossibly short amount of time (get it ready for tomorrow), we want this massive list of features (and no, we’re not prepared to leave any out), and oh by the way, we only have enough money to pay for 1 developer to work for 8 hours”.

While technology can go some way to helping produce more for less (code generation etc), the reality is that software development in the real world is not a technical problem. It is a business problem of negotiating contracts and managing expectations. For the technologist, this is the continual struggle – pretty much everything you work on will be under less than ideal conditions.

Business Driven Development (BDD). Welcome to the Real World.