Many changes in IT occur as an indirect result of the development or introduction of some other technological change. Not all of these changes involve the invention of a completely new technology; some are driven by the increased availability or reduced cost of a resource that was already available. Falling costs, for example, made it practical to build systems from commodity, off-the-shelf x86/AMD64 servers rather than buying more expensive, branded systems from a big-name supplier (e.g. IBM or Oracle).
Some trends in IT systems development come and go more like fashions in clothing: today’s hot new trend is tomorrow’s old news, yet some trends come back into fashion at some point in the future. IT trends are less a linear timeline of incremental improvements and more a churning cycle of revolving ideas that gain popularity and then fall out of favor as we strive to find what works versus what doesn’t, what’s more efficient, or what’s more effective.
As an example, computer systems in the 1960s and 70s were mainly centralized: computing resources were provided by hardware in a central physical location, usually a mainframe at that time, and accessed remotely by users via terminals. The terminal device had little or no processing power itself; all computing resources were provided by the centralized system.
After the introduction of the IBM PC and its clones, computing resources became available on the user’s desk rather than locked up in the computer room. This enabled systems where part of the processing was handled by an application running on the user’s desktop and part by backend resources running remotely. This application style is called client/server.
Every architectural decision has pros and cons. While client/server systems reduced the need for centralized processing resources by off-loading some processing to the user’s desktop, distributing and maintaining an application installed on every desktop brings other challenges (how do you install, update, and patch an application deployed to hundreds or thousands of end-user workstations?).
Where am I going with this?
This was an oversimplified and not entirely historically complete summary of the evolution of IT systems development over the past 50 years or so. The point I want to make is this: regardless of whether we develop software for mainframes, desktop applications, client/server systems, web-based applications, mobile apps, or cloud-based apps, the process of writing code has not changed much in over 50 years:
We type code by hand using a keyboard. Typing every letter of the source code, letter by l.e.t.t.e.r.
The only arguably significant changes are that we no longer develop software by plugging wires into different terminals on a plugboard, or by punching cards and stacking them in a card reader to load a program into memory. These differences aside, we’ve been busy typing source code into computers using a keyboard for the past 40 years or so. Even our current IDEs, our Visual Studios, Eclipses, and Netbeanses, are not that different from the editors we used to develop our Borland Turbo C in the 80s and early 90s. Just as our deployment approaches cycle round and browsers have become our new (internet) terminals, for some reason developers in the front-end world are obsessed with text editors like Sublime Text, Atom, Brackets, and the newcomer from Microsoft, Visual Studio Code (or, for the real programmers, Emacs and vi/vim), shunning the more feature-packed IDEs. I realize this is an exaggeration to make a point about how we still code with text editors; in reality, today’s text editors with syntax highlighting and code-completion features are arguably far closer to IDEs than to plain text editors at this point. But hey, we’ve always boasted that real developers only code in vi or Emacs, right?
More Productive Developer Tools?
At various points in the past, there have been developer tools that, you could argue, were far more productive than the IDEs like Eclipse and NetBeans we use today for Java development. And yet, for many reasons, we chose to continue to type code, letter by letter, by hand. Sybase’s PowerBuilder, popular in the mid-1990s, was an incredibly productive development platform for building client/server applications (I did a year of PowerBuilder development in 1997). Why was it more productive? To oversimplify: to build a database-backed application with CRUD functionality (create/retrieve/update/delete), you pointed the development tool at your database schema, visually selected the columns from tables that you wanted to display on screen, and it generated a GUI for you using a UI component called a DataWindow, also allowing you to drag and drop to customize the display as needed. Sure, you still coded additional business logic in PowerScript by hand, but the parts that we spend so much of our time building by hand with today’s tech stacks and tools were done for you by the development tool.
Other tools supporting this type of visual programming have appeared over the years, like IBM’s VisualAge family of development tools. Available for many platforms and programming languages, VisualAge provided a visual programming facility where you graphically dragged links between components representing methods to be executed based on some condition or event.
Interestingly, many of the features of VisualAge Micro Edition became what is now known as Eclipse. I find that particularly interesting as a Java developer, having used Eclipse for many years and, earlier in my development past, VisualAge Generator and VisualAge for Java at different companies. I even still have a VisualAge for Java install CD (not sure why, but it’s still on my shelf):
VisualAge for Java from 1997, IBM Java CD from 1998, BEA Weblogic Process Integrator from 2000 #Java20 pic.twitter.com/t6iWCYu5sl
— Kevin Hooke (@kevinhooke) May 22, 2015
More recently we’ve had interest in Model Driven Development (MDD) approaches, probably the most promising move towards code generation. For those who remember Rational Rose in the mid-1990s and its ability to ‘round-trip engineer’ from model to code and from code back to model, it does seem like we’ve been here before. When the topic of code generation comes up, I remember one of my college lecturers, during a module on ‘Computer Aided Software Engineering’ (CASE), stating that in the future we would no longer write any code by hand; all code would be generated from models using CASE tools. This was in 1992.
24 years later, we’re still writing code by hand. Using text editors.
Now stop reading and go write some code 🙂