Configuring Grails on Ubuntu Linux

I’ve recently installed Ubuntu Linux on my laptop, and in order to get Grails running, had to do the following:

  1. Edit /etc/environment and set up environment vars for Groovy, Grails and Java:
    <code>
    GRAILS_HOME=/INSTALL_DIR/grails-0.2
    GROOVY_HOME=/INSTALL_DIR/groovy-1.0-jsr-05
    JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun-1.5.0.06
    export GRAILS_HOME GROOVY_HOME JAVA_HOME
    PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games:$GRAILS_HOME/bin:$GROOVY_HOME/bin"
    </code>

    This file was new to me on Ubuntu – I’m used to defining env vars in a .profile file. To pick up the new vars in your current shell, source the file: ‘source /etc/environment’.

  2. After the previous step, the script files included with Grails needed to be made executable before they would run. cd into the Grails install directory, then ‘chmod +x’ both INSTALL_DIR/bin/grails and INSTALL_DIR/ant/bin/ant.
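    Assuming Grails 0.2 was unpacked under the same INSTALL_DIR used in /etc/environment above (INSTALL_DIR is a placeholder – substitute your own install location), the commands are just:

    ```shell
    # Make the Grails launcher and its bundled Ant script executable
    # (INSTALL_DIR is a placeholder for your own install path)
    chmod +x /INSTALL_DIR/grails-0.2/bin/grails
    chmod +x /INSTALL_DIR/grails-0.2/ant/bin/ant
    ```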

To Layer, or not to Layer? That is the Architectural Question

It’s a commonly accepted practice when designing large scale enterprise applications (or even smaller applications for that matter) to layer your architecture code as well as your application code. ‘Separation of Concerns’ – for example separating your business logic from your data access logic, or your presentation logic from your business logic – is a common technique for ensuring that your application code is well defined in terms of its responsibilities. This approach gives you code that is easier to maintain and easier to debug – if you have a problem with the data access then you know to look in your data access layer.

In Java technology based applications it seems like we have become the masters of taking this approach to the nth degree. A typical J2EE web-based system might comprise:

  • Presentation tier:
    • coded using JSP pages
    • an MVC framework like Struts to loosely couple the pages, the control of page navigation and the interaction with adjacent layers: ActionForms as containers for submitted data, and Actions for controlling the navigation between pages (via struts-config.xml) and interfacing to the next adjacent layer
  • Business layer facade: an additional layer to decouple the presentation tier from the business layer technology, to avoid coupling the Struts Actions directly to Session Beans
  • Business component layer: implemented using Stateless and Stateful Session beans
  • Business layer: the actual business logic itself. Not directly coded inside the Session beans so it can be reused elsewhere where Session Beans are not deployable
  • Data Access Layer: code to interact with the database, providing data retrieval and storage facilities.

At its simplest, maintaining a separation between presentation, business and data access seems to be the minimum degree of logical layering you should require in your system.

So why in a typical J2EE application have we ended up with so many more layers? It seems like we’ve become obsessed with decoupling the base technologies that we’re using to build our J2EE systems, which increases the amount of code we have to develop and, rather than simplifying application development, has done more to complicate our architectures.

My reason for thinking about this right now is that I started to use PHP to put together some simple database driven web pages that interact with my weather monitoring station (http://www.kevinhooke.com/weather). In all the PHP books that I’ve looked at so far, I haven’t seen any mention of layering my system, or of separating my presentation logic from my data access logic. Instead, the general approach for PHP seems to encourage data access directly from your presentation pages. Coming from the frame of mind where I am encouraged to ‘separate, separate!’, ‘build more layers!’, this is a refreshing change – I can develop pages in a fraction of the time it would take me with a typical J2EE layered approach. Why? Because I don’t have to develop additional plumbing code to interact between my many layers.

In the majority of cases in heavily layered applications, simple data access requires just ‘call-throughs’ from one layer to the next in order to reach the database and bring back the data you need – the additional layers you must call through don’t add any functionality of their own (this is not always the case, but in the simple cases it is).
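To make the ‘call-through’ point concrete, here’s a minimal sketch in plain Java – all class names are hypothetical, and the Session Bean and Struts machinery is reduced to plain classes. Three of the four layers do nothing but delegate to the layer below:

```java
// Illustrative only: each layer simply forwards the call to the next,
// so fetching one value still costs four classes of plumbing.

// Data access layer: the only layer doing any real work
class CustomerDao {
    String findCustomerName(int id) {
        // imagine a JDBC query here; hard-coded for the sketch
        return "Customer-" + id;
    }
}

// Business layer: a pure pass-through in the simple case
class CustomerService {
    private final CustomerDao dao = new CustomerDao();
    String findCustomerName(int id) { return dao.findCustomerName(id); }
}

// Business component layer (would be a Session Bean in J2EE)
class CustomerComponent {
    private final CustomerService service = new CustomerService();
    String findCustomerName(int id) { return service.findCustomerName(id); }
}

// Business layer facade: shields the web tier from the component technology
class CustomerFacade {
    private final CustomerComponent component = new CustomerComponent();
    String findCustomerName(int id) { return component.findCustomerName(id); }
}

public class LayeringSketch {
    public static void main(String[] args) {
        // A Struts Action would sit here, calling through the facade
        System.out.println(new CustomerFacade().findCustomerName(42));
    }
}
```

Each delegating method is trivial, but every new query means touching four classes – that’s the plumbing cost the extra layers impose.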

So when should you layer? I still believe in the benefits of layering applications, but I think the benefit of easier to maintain, well layered code comes at the cost of development time and the additional effort to write the code required in each layer. Small web-based applications, such as a forum application, may not benefit from the additional overhead of layering the application. However, I cannot imagine working on a large development effort with a medium to large development team (> 50 developers) and hundreds of front end pages without having well defined layers between logical responsibilities in the system – it just wouldn’t work.

As a development community though, we need to spend time thinking about how we can maximise the benefits gained from approaches such as architectural and application layering, while reducing the overhead of this type of approach – somehow avoiding having to write the additional code calling through from layer to layer. I haven’t spent much time with Ruby on Rails, for example, but from what I can see of their approach they have kept the design patterns typical in J2EE applications, while removing the need for the developer to spend so much time writing plumbing code – this is handled pretty much by the framework itself. This is where I believe we need to be heading.

Avoiding excessive overtime in Software Development

News.com have an article on their site today stating that software developers are starting to work shorter, more normal work weeks (40 hours a week), instead of the 50–80 or even more that were not uncommon in the late ’90s and early 2000s.

This is a great trend, not because I don’t love developing software, because I do, but I have plenty of other things I would rather be doing outside of work, like spending time at home with my wife and pets, and on my hobbies. After all, the main motivation to work for the majority of people is probably a) to pay the bills, and b) to earn money to spend on having a good time, whatever that may be – travelling, hobbies, eating out, going to movies etc.

The problem is, if you spend 12 hours at work every day, where is the free time for the activities outside of work that you enjoy? And if you are working to earn money to enable you to take part in activities that you enjoy outside of work, when do you actually have any free time to take part in them? In extreme circumstances you might as well not be working at all, because you end up with zero free time.

Software development is not a production line, or some machine that produces lines of code. If you can produce 40 lines of well written, tested, working code in an 8 hour day, increasing the working hours to 16 hours a day is not going to get you 80 lines of code from each developer – software development just does not work like that. Software development is a cerebral exercise, one that takes a lot of thought, experimentation, and dare I say it, creativity. Spending more time in one day on a task is not going to result in the task being completed sooner. Why? Because people get tired. The average human attention span is something like 20 minutes – after that point the mind starts to wander, and people need to take frequent breaks (tea/coffee/smoking breaks) before starting again on the task feeling more focused.

This is a point that project management and some project leadership just do not understand, and I cannot understand why, since most software development managers were themselves programmers at some point.

The classic example, which I am sure has happened to almost all of us, is the late night debugging session on a problem that you just can’t work out. Eventually you call it a night after having spent several hours on the one task. The next morning you come in, look at the code, and instantly find the problem. Why? Because after working 10-plus hours you are tired, you are not focused, you are probably restless from sitting still for so long, you’re late getting home for dinner, and there are countless other chores nagging in the back of your head that you know you need to get home to do – in essence you are not focused on the problem. You go home, have a good rest, and the next morning the problem seems obvious. It’s not that it is now obvious; it’s just that you are now addressing the problem when you are at your freshest and most focused.

One of the other side-effects of working longer hours is that when people get tired they tend to make more mistakes, or make judgements or decisions that they wouldn’t normally make if they were more alert and focused. So what happens after a late night programming session? You come in the next morning and the first thing you do is spend a couple of hours fixing the bugs from the previous night, or even reworking the code, because what you wrote when you were tired is not the most effective solution to the problem.

So from spending an extra 4 hours at the end of the day writing some poorly written code and introducing a few bugs here and there, you waste a few hours the next morning correcting the mess from the previous night. This does not sound like an effective use of time to me. Over the longer term, if your team continually works long hours for 2 weeks or more, the problems become more severe – the team becomes demoralized and just generally worn out. People are not machines – they need time to recuperate and rest in order to work effectively.

One of the rules of Extreme Programming is that no-one shall work more than 40 hours a week. This is not a dream – it’s realistic. If you demand more from your staff you often end up with less.

In software development, overtime is never the solution to a problem – overtime is always a symptom of a wider problem that needs to be addressed. If you don’t look for that wider problem and identify and address the issue, then the root cause of why you find yourself having to ask your team to work overtime will not go away by itself – you’ll instead find yourself demanding more and more from your team, which inevitably lowers morale. Prolonged periods of overtime (i.e. more than 1 week in a row) will always have a far greater negative effect on the team and on productivity as a whole than any short term positive gains.