Never assume you know how something works by only observing its external behavior

As a developer, you should never assume you understand how something works solely by observing what it does. This is especially true if you’re trying to fix something and your only understanding of the issue is the behavior you can observe.

While you don’t have to understand how something works in order to use it, if you’re trying to fix something, especially software, that understanding matters. The reason is that what you observe externally is usually only a symptom of the problem; it’s rarely the actual problem itself.

Let me give you an extremely simplified example. Let’s say you have an electric car, but you’ve no idea how the electric drivetrain works – you just know you press the accelerator pedal and the car goes. One morning you get in, press the pedal, and nothing happens. In diagnosing the issue, the only thing you consider is the external symptom you can see: you press the pedal and the car doesn’t go. An extremely naive conclusion would be that the accelerator pedal is broken (!). So you replace the pedal, and then you’re surprised to find it still doesn’t work. (Ok, this is a contrived example to make the point – if you know enough to replace the accelerator pedal, you probably know enough about how the car works not to assume the pedal was broken in the first place!)

As a software developer or architect, when you diagnose issues you should always look under the covers and find out what’s actually going on. The problem you’re looking for is rarely the symptom you can see (or that the user sees).

Making the most of your time (or, working smarter, not harder)

‘Work smarter, not harder’, as the saying goes. For most people this is common sense. There are many ways you can interpret this advice, from finding tools and techniques to help you automate repetitive tasks, to prioritizing tasks that make the highest contribution towards achieving a goal and deprioritizing or even ignoring other, less important tasks – why spend time on a task if it doesn’t help you get to your goal?

After spending a while writing this article I very nearly decided not to post it – doesn’t everyone get this already? It’s kind of obvious, right? Well, it’s only obvious if you already get it, and maybe not so obvious if you don’t. The reason I felt there was value in sharing this is that I’ve come across enough people during my years working in software development who just don’t get it. If those people can take even the smallest something from this post, then hopefully they’ll be saved from their misguided attempts to ‘do all the things’.

Deciding where to spend your time is like choosing where to invest your money. We each have a finite amount of time to invest in working on tasks – there are only 24 hours in a day, and at least some of those hours need to be spent sleeping. More often than not, the time you have available is less than the time needed to complete all your tasks, whether at work or in your personal life.

Who needs to sleep?

First, let’s get rid of the idea that spending more time working and less time sleeping is the answer to all problems. It isn’t going to help you in the long term. Cutting your hours of sleep so you can do more during the day is rarely a sensible approach: in the long run you’ll just end up more tired, and working while tired means you’re less alert, less effective, and more likely to make mistakes. These are obvious points, but some people just don’t get this. Spending more hours working rarely means you complete more work; if anything, it means you produce more output, but at lower quality and with more errors. If that’s acceptable to you and/or your employer then great, but for most of us it’s not a sensible or effective option.

Automating Repetitive Tasks

As software developers, we’re in a much better position to automate tasks than most. Have a manual task that is time-consuming and that you find yourself repeating again and again? Write a utility to help you automate it! Ok, so it’s not always as easy as that, and you might not have the free time to spend developing the automation. But if the time invested up front to build the automation will free up time to work on other tasks in the future, it might be worth the initial investment. Discuss it with your supervisor if you’re unsure whether you should be spending time building automation rather than working on the task itself.

Developing a script or a standalone app to help automate tasks doesn’t have to be a major development effort either. Sometimes learning shortcuts in your existing tools can be a major time saver. Here are some quick ideas:

  • Learn some simple regular expressions for matching patterns, and how your text editor of choice uses them for search and replace. Manual, repetitive tasks like replacing some pattern of text in a file can be done in seconds with a little regex.
  • Moving or deleting a column of data in a tabular text file: most text editors can select a column of text with the mouse, rather than whole horizontal lines. This makes it easy to delete or move a column.
  • Learn a couple of approaches for working with multiple files across subdirectories. This can often be combined with other approaches, like using a regex for pattern matching and replacing text. If you have to apply an update across multiple files, knowing how to write a quick script to find all matching files in all subdirectories can be a massive timesaver (see the sketch after this list).
  • Keep a directory of common scripts you’ve developed in the past so you can reuse them in the future. It’s often quicker and easier to reuse a previous script as a starting point and modify it, rather than start from scratch every time.
  • If your scripts are generic and can be shared (and do not contain company proprietary information, and your company allows you to share source code publicly – always check if you’re unsure), share reusable snippets as GitHub Gists or as a public GitHub project, or on similar code-sharing sites like Pastebin. This lets you build up a library of useful scripts, and allows others to benefit from your work too.
  • Treat writing a script to automate a task as an opportunity to learn a new language. Learn some JavaScript and Node.js, Groovy, or [insert language you’ve been meaning to learn here].
  • Never underestimate the tools you have right at your fingertips. If you’re lucky enough to be developing on a *nix platform, your shell has a myriad of tools you can pipe together to complete a more complex task, e.g. find -exec, sort, grep, awk, sed, wc -l, and bash scripting in general for anything more complex than a one-liner.
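
To make a couple of these ideas concrete, here’s a minimal sketch of the ‘find all matching files and apply a regex replace’ approach, plus a column trick, using standard *nix tools. The file names, the property being renamed, and data.tsv are made up purely for illustration, and GNU versions of sed and grep are assumed (the flags differ slightly on macOS/BSD):

    #!/usr/bin/env bash
    # Hypothetical example: rename a config property across every
    # .properties file in the current directory tree.
    # GNU sed in-place syntax; on macOS/BSD use: sed -i '' ...
    find . -type f -name '*.properties' \
      -exec sed -i 's/db\.hostname/db.host/g' {} +

    # Sanity check: count the files that still mention the old name.
    grep -rl 'db\.hostname' --include='*.properties' . | wc -l

    # Drop the third column from a tab-separated file by keeping
    # fields 1, 2 and everything from 4 onwards (cut splits on tabs
    # by default).
    cut -f1,2,4- data.tsv > data-trimmed.tsv

Nothing here is clever, and that’s the point – a handful of piped-together commands can replace an afternoon of manual editing.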

Prioritizing your tasks

In general, most of us have some tasks or responsibilities that are core to our role – things we have to do, or are expected to do – and most likely an assortment of other tasks that are not essential or time critical, and maybe don’t even contribute towards our core responsibilities. These non-essential tasks come up from time to time: they might be ‘nice to haves’, or improvement-type side projects that come up in conversation. They may also be longer-term skills development activities like mentoring or knowledge transfer. In the long run these add value to your organization as a whole, but they might not contribute to getting a product built and shipped out the door today.

If we had to categorize each task in terms of importance, we could think of a number of scales along which to place each task:

  • essential to achieving core role responsibilities, versus non-essential
  • short-term tasks versus longer-term tasks
  • tasks that result in short-term/immediate gains (quick wins), versus longer-term gains (strategic investments), for either you or your company
  • high effort versus low effort

Think about the tasks you have, and try to prioritize them so that (if possible) you spend time on tasks with higher short- or long-term gains, and de-prioritize the other, non-essential tasks.

If you really can’t get away from doing ‘all the things’ yourself, at least try to automate as much as you can, so you free up time for the tasks with greater rewards or impact.

‘Automate all the things’, not ‘do all the things’!

Why hasn’t the evolution of software development tools kept pace with other changes in the development of software systems?

Many changes in IT occur as an indirect result of some other technological change. Not all of these involve the invention of a completely new technology; some are driven by the increased availability or reduced cost of a resource that already existed. Falling costs, for example, made it practical to build a system from commodity, off-the-shelf x86/AMD64 servers rather than buying more expensive, branded systems from a big-name supplier (e.g. IBM or Oracle).

Some changes in the development of IT systems come and go more like fashions in clothing: today’s hot new trend is tomorrow’s old news, but some trends come back into fashion again at some point in the future. IT trends are less like a linear timeline of incremental improvements and more like a churning cycle of revolving ideas that gain popularity and then fall out of favor, as we strive to find what works versus what doesn’t, what’s more efficient, or what’s more effective.

As an example, computer systems in the 1960s and 70s were mainly centralized: computing resources were provided by hardware in a central physical location – at that time usually a mainframe – and accessed remotely by users via terminals. The terminal device had little or no processing power itself; all computing resources were provided by the centralized system.

After the introduction of the IBM PC and its clones, computing resources became available on the user’s desk rather than locked up in the computer room. This allowed the development of systems where part of the processing is provided by an application running on the user’s desktop, and part by backend resources running remotely. This application style is called client/server.

Every architectural decision has pros and cons. While client/server systems reduced the need for centralized processing resources, since some processing is off-loaded to the user’s desktop, distributing and maintaining an application installed on the user’s desktop brings other challenges (how do you install, update, and patch an application deployed to hundreds or thousands of end user workstations?).

The development of web-based applications addressed some of the application distribution limitations of client/server systems. Initially, though, the user experience possible with early versions of HTML and the lack of interactivity in early browsers felt like several steps backwards: web applications resembled the dumb terminal applications of years prior, compared to what was possible with a thick client leveraging the native UI controls and features of the OS it ran on. As JavaScript evolved, richer features were added in HTML5, and JavaScript frameworks emerged to help developers build richer, more interactive browser-based experiences without writing code to manipulate the browser’s DOM by hand. Fast forward to today, and it’s arguable that we’ve reached the point where a web-based application can equal the user experience that was previously only possible in a native application running directly on an OS platform.

Where am I going with this?

This was an overly simplified and not entirely complete summary of the evolution of IT systems over the past 50 years or so. The point I want to make is that regardless of whether we develop software for mainframes, desktop applications, client/server systems, web-based applications, mobile apps, or cloud-based apps, the way we develop software today – the process of writing code – has not changed much in over 50 years:

We type code by hand using a keyboard. Typing every letter of the source code, letter by l.e.t.t.e.r.

The only arguably significant changes are that we no longer develop software by plugging wires into a plugboard, or by punching cards and stacking them in a card reader to load a program into memory. Those differences aside, we’ve been typing source code into computers using a keyboard for the past 40 years or so. Even our current IDEs – our Visual Studios, Eclipses, and NetBeanses – are not that different from the editors we used to write our Borland Turbo C in the 80s and early 90s. Just as our deployment approaches cycle round and browsers have become our new (internet) terminals, front-end developers are now obsessed with text editors like Sublime Text, Atom, Brackets, and the newcomer from Microsoft, Visual Studio Code – or, for the real programmers, Emacs and vi/vim – shunning the more feature-packed IDEs. I realize this is an exaggeration to make a point about how we still code with text editors; in reality, today’s text editors with syntax highlighting and code completion are arguably closer to IDEs than to plain text editors. But hey, we’ve always boasted that real developers only code in vi or Emacs, right?

More Productive Developer Tools?

At various points in the past, there have been developer tools that, you could argue, were far more productive than the IDEs like Eclipse and NetBeans that we use today for Java development. And yet, for many reasons, we chose to continue typing code, letter by letter, by hand. Sybase’s PowerBuilder, popular in the mid 1990s, was an incredibly productive development platform for building client/server applications (I did a year of PowerBuilder development in 1997). Why was it more productive? To oversimplify: to build a database-backed application with CRUD (create/retrieve/update/delete) functionality, you pointed the development tool at your database schema, visually selected the columns from tables that you wanted to display on screen, and it generated a GUI for you using a UI component called a DataWindow, letting you drag and drop to customize the display as needed. Sure, you still coded additional business logic in PowerScript by hand, but the parts we spend so much of our time building by hand with today’s tech stacks and tools were done for you by the development tool.

Other variations on this type of visual programming have appeared over the years, like IBM’s VisualAge family of development tools – available for many platforms and programming languages – which provided a visual programming facility where you graphically dragged links between components representing methods to be executed on some condition or event.

Interestingly, many of the features of VisualAge Micro Edition became what is now known as Eclipse. I find that particularly interesting as a Java developer who has used Eclipse for many years, and who in my development past used VisualAge Generator and VisualAge for Java at different companies. I even still have a VisualAge for Java install CD (not sure why, but it’s still on my shelf).


More recently we’ve had interest in Model Driven Development (MDD) approaches, probably the most promising move towards code generation. For those who remember Rational Rose in the mid 1990s and its ability to ‘round-trip engineer’ from model to code and from code back to model, it does seem like we’ve been here before. When the topic of code generation comes up, I remember one of my college lecturers, during a module on ‘Computer Aided Software Engineering’ (CASE), stating that in the future we would no longer write any code by hand – all code would be generated by CASE tools from models. That was in 1992.

24 years later, we’re still writing code by hand. Using text editors.

I don’t have an answer to the question I opened this post with – why haven’t our development tools advanced further? – but at least we’ve moved on from punch cards. Then again, who knows: maybe someone is about to release a card reader for developing JavaScript. If anything, the reason could be that developers love to code. Take away the code, and you remove the need for developers. Maybe that’s not a bad thing. Maybe to evolve our industry we have to stop encouraging coding by hand and start consciously focusing on other techniques like MDD. But as long as you have talented and passionate developers who love to code, you will have code. I don’t think code is going away any time soon. In the near future we may spend more of our development effort building systems that can evolve and learn by themselves (machine learning), but someone still has to program those systems. Coding by hand is likely to be around for a while yet.

Now stop reading and go write some code 🙂

Identifying and responding to issues faster with more frequent deploy cycles

Everything in IT has pros and cons and, as they say, YMMV. One of the things that has always interested me in IT is how your point of view – the lens through which you perceive ‘how things are done’ – is always (and perhaps obviously) influenced by your current company, project, client, or organizational experience.

If, for example, your current experience is (and perhaps has always been) a traditional waterfall-type development lifecycle, then the very idea that it’s possible to build, test, and deploy new functionality to production in anything less than 6-month release cycles – let alone daily – probably sounds like an impossibility.

However, if this is your experience (and to be honest, most of my industry experience has been with more traditional methods), there is a light bulb moment when you realize what it means to be able to deploy to production in very frequent, short development cycles.

The faster you can deploy new features to production, the faster you can learn what works and what doesn’t. If you find problems sooner, you can respond and fix them sooner. Fixing problems sooner means they don’t grow into much larger problems that become more expensive and more difficult to fix later. Getting features in front of your users sooner gives them the opportunity to provide feedback quicker, and either confirm that this is what they were expecting or flag the changes needed to get closer to exactly what they’re looking for.
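
Mechanically, a short deploy cycle doesn’t require exotic tooling. As a minimal sketch – and only a sketch, since the build commands, host name, and paths below are hypothetical placeholders rather than any particular product’s pipeline – the spirit of it is a script your CI server runs on every commit:

    #!/usr/bin/env bash
    # Minimal sketch of a build-test-deploy cycle run on every commit.
    # All commands, hosts, and paths are hypothetical placeholders.
    set -euo pipefail   # abort immediately if any step fails

    ./gradlew build            # compile and run unit tests
    ./gradlew integrationTest  # slower integration checks

    # Only reached if every step above succeeded (thanks to set -e):
    # ship the artifact and restart the service.
    scp build/libs/app.jar deploy@prod-host:/opt/app/app.jar
    ssh deploy@prod-host 'sudo systemctl restart app'

The point isn’t the specific commands; it’s that once the whole cycle is automated and cheap to run, deploying daily stops being scary and the feedback loop described above becomes practical.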

Continuous Delivery is not a myth; there are many organizations out there already successfully deploying to production daily (InfoQ.com has a great collection of presentations on the topic).