Archives: July 2005

Windows disk recovery with CD-booting Linux

I made the ultimate mistake as a Windows user a few months ago… I moved a hard disk with XP already installed into a new machine, with a different motherboard and CPU. What a mistake. I can’t believe the trouble it caused with XP – it became so erratic that it was just that… unusable. Sure, it would boot up… in about 5 minutes, but even after it had booted it would take forever loading the anti-virus, firewall, etc.

Linux, on the other hand, on the same drive (dual boot using Partition Magic/Boot Magic), couldn’t care less about being moved to another machine, and booted just as it always had done… no problems whatsoever. I think Microsoft need to rethink their Product Activation and its restrictions on hardware changes, because it just adds pain for the user.

Anyway, after running many registry fixers, optimizers and anything else I could find, I ended up with a dead install of XP that would no longer boot. On top of this, at some point the drive diagnostic software (it was a Quantum 40GB drive) started reporting drive errors, and I couldn’t reformat the primary partition, so I couldn’t even reinstall XP.

This would normally have been the end of the road for whatever I had on this drive, but I was saved by being able to boot Linux from a CD (Knoppix, the STD distro) and copy the files from the good partitions to my USB hard drive.

To Layer, or not to Layer? That is the Architectural Question

It’s a commonly accepted practice when designing large scale enterprise applications (or even smaller applications for that matter) to layer your architecture code as well as your application code. ‘Separation of Concerns’ – for example separating your business logic from your data access logic, or your presentation logic from your business logic – is a common technique for ensuring that your application code is well defined in terms of its responsibilities. This approach gives you code that is easier to maintain and easier to debug – if you have a problem with the data access then you know to look in your data access layer.

In Java technology based applications it seems like we have become the masters of taking this approach to the nth degree. A typical J2EE web-based system might comprise:

  • Presentation tier:
    • coded using JSP pages
    • an MVC framework like Struts, to loosely couple the page navigation, its control logic, and the interaction with adjacent layers: ActionForms as containers for submitted data, and Actions for controlling the navigation between pages (using struts-config.xml) and interfacing with the next adjacent layer
  • Business layer facade: an additional layer to decouple the presentation tier from the business layer technology, to avoid coupling the Struts Actions directly to Session Beans
  • Business component layer: implemented using Stateless and Stateful Session beans
  • Business layer: the actual business logic itself, not coded directly inside the Session Beans so that it can be reused in environments where Session Beans are not deployable
  • Data Access Layer: code to interact with the database, providing data retrieval and storage facilities.

At its simplest, maintaining a separation between presentation, business and data access seems to be the minimum degree of logical layering you should require in your system.

So why in a typical J2EE application have we ended up with so many more layers? It seems like we’ve become obsessed with decoupling the base technologies that we’re using to build our J2EE systems, which increases the amount of code we have to develop and, rather than simplifying application development, has done more to complicate our architectures.

My reason for thinking about this right now is that I started to use PHP to put together some simple database driven web pages that interact with my weather monitoring station (http://www.kevinhooke.com/weather). In all the PHP books that I’ve looked at so far, I haven’t seen any mention of layering my system, or of separating my presentation logic from my data access logic. Instead, the general approach for PHP seems to encourage data access directly from your presentation pages. Coming from the frame of mind where I am encouraged to ‘separate, separate’, ‘build more layers!’, this is a refreshing change – I can develop pages in a fraction of the time it would take me with a typical J2EE layered approach! Why? Because I don’t have to develop additional plumbing code to interact between my many layers.
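
In Java terms, the PHP style is roughly the equivalent of a page that queries the database inline. As a minimal sketch (the table, columns and connection details here are invented for illustration – this isn’t my actual weather schema), an unlayered servlet doing the same thing might look like this:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // No layers at all: presentation and data access in one place, PHP-style.
    public class WeatherReadingsServlet extends HttpServlet {

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body><ul>");
            Connection con = null;
            try {
                // hypothetical MySQL database, table and column names
                Class.forName("com.mysql.jdbc.Driver");
                con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/weather", "user", "password");
                Statement stmt = con.createStatement();
                ResultSet rs = stmt.executeQuery(
                        "SELECT reading_time, temp_f FROM readings ORDER BY reading_time DESC");
                while (rs.next()) {
                    out.println("<li>" + rs.getTimestamp(1) + " : "
                            + rs.getDouble(2) + "F</li>");
                }
            } catch (Exception e) {
                throw new ServletException(e);
            } finally {
                if (con != null) {
                    try { con.close(); } catch (Exception ignored) { }
                }
            }
            out.println("</ul></body></html>");
        }
    }

Quick to write, but the trade-off is obvious: the SQL and the HTML are tangled together, which is exactly what layering is meant to prevent.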

In the majority of cases in heavily layered applications, simple data access requires just ‘call throughs’ from one layer to the next in order to reach the database and bring back the data you need – the additional layers you must call through in order to get to the database don’t add any additional functionality (of course this is not always the case, but it often is in the simple cases).
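
To make the ‘call through’ point concrete, here’s a stripped-down sketch of the pattern – all the class names are invented, and the Struts and EJB plumbing is reduced to plain Java classes – where only the DAO at the bottom does any real work:

    import java.util.Arrays;
    import java.util.List;

    // Each layer simply delegates to the one below it, adding no behaviour.
    public class LayeredCallThrough {

        // Business layer facade: shields the presentation tier from the
        // business component technology (in real code, the EJB lookup)
        static class CustomerFacade {
            List getCustomers() {
                return new CustomerService().getCustomers();
            }
        }

        // Business component layer: would be a Stateless Session Bean
        static class CustomerService {
            List getCustomers() {
                return new CustomerBusinessLogic().getCustomers();
            }
        }

        // Business layer: for a simple read, the "logic" is another delegation
        static class CustomerBusinessLogic {
            List getCustomers() {
                return new CustomerDAO().findAllCustomers();
            }
        }

        // Data Access Layer: the only class that actually does anything
        static class CustomerDAO {
            List findAllCustomers() {
                // a real implementation would run a JDBC SELECT here
                return Arrays.asList(new String[] {"Alice", "Bob"});
            }
        }

        public static void main(String[] args) {
            // a Struts Action would make this same call from the presentation tier
            System.out.println(new CustomerFacade().getCustomers());
        }
    }

Four classes and four near-identical method signatures to maintain, to do the work of one – and that’s the overhead I’m talking about.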

So when should you layer? I still believe in the benefits of layering applications, but I think the benefit of easier-to-maintain, well layered code comes at the cost of development time and the additional effort of writing the extra code required in each layer. Small web-based applications, such as a forum application, may not benefit from the additional overhead of layering the application. However, I cannot imagine working on a large development effort with a medium to large development team (> 50 developers) and hundreds of front end pages without well defined layers between logical responsibilities in the system – it just wouldn’t work.

As a development community though, we need to spend time thinking about how we can maximise the benefits gained from approaches such as architectural and application layering, while reducing the overhead of this type of approach – somehow avoiding having to write the additional code that calls through from layer to layer. I haven’t spent much time with Ruby on Rails, for example, but from what I can see of their approach they have kept the design patterns typical in J2EE applications while removing the need for the developer to spend so much time writing plumbing code – it is handled pretty much by the framework itself. This is where I believe we need to be heading.

Father of Atari and Pong to launch restaurant/entertainment chain

Nolan Bushnell, the original founder of Atari and creator of the classic game Pong, is planning to open a restaurant chain for 20-somethings focused on bringing together food and electronic entertainment.

Bushnell already has experience in the restaurant/entertainment business, as he also founded the ‘Chuck E. Cheese’s’ restaurant chain for pre-teens.

This new venture, called uWink, will be using table-top game machines, presumably so you eat while you play…

Microsoft’s Virtual Earth worlds away from Google Earth

Microsoft’s Virtual Earth service was launched today, but the service is worlds away from what you can find at Google Earth.

I tried browsing around several places, but the map scrolling was clunky and slow, several tiles failed to load, and when I zoomed in on areas in California there were no aerial photos of the places I was looking at.

This service is pretty poor compared with the TerraServer service they used to run, even though TerraServer’s satellite imagery was black and white. I guess it’s early days, but Google Earth is still the ultimate in this area.

Sony PS3 predicted to have over 60% market share by 2012

Market research company Strategy Analytics are forecasting that the PS3 will have a 61% share of the next generation console market by 2012, despite the fact that Microsoft will be releasing the XBox360 some six months earlier, in time for Christmas 2005.

Both Microsoft and Sony will be making a loss on their new consoles in an attempt to reach an attractive price point, but the PS3 is expected to be the more expensive of the two new consoles, possibly starting at around $400.

Despite not being released until Spring 2006 and being more expensive than the XBox360, which rumours say will be released in November 2005, Strategy Analytics still think the PS3 will gain the major share of the market.

Microsoft following Google Earth – launches ‘MSN Virtual Earth’

It seems that recently Microsoft can only follow and imitate the current innovators, rather than getting to market first with new products. But after all, this is how Microsoft have always made their money, ever since buying the rights to QDOS from Seattle Computer Products in 1980 and licensing it to IBM as MS-DOS for the first IBM PC.

Microsoft has announced their own version of satellite image browsing, a la Google Earth (technology which Google gained from acquiring Keyhole), called MSN Virtual Earth.

To give Microsoft credit, they actually had a very innovative service a few years ago, not unlike Keyhole/Google Earth today, called TerraServer, which allowed the user to browse black and white satellite imagery and purchase prints online. This Microsoft site has an interesting history of the project, which I believe was a project to demonstrate Microsoft SQL Server operating with over a terabyte of online data. What’s interesting is that at the time the system was being put together (in the late 90’s), in order to achieve a terabyte of storage the system was using over 300 9GB hard drives. I don’t think it will be too much longer before terabyte drives are available to home users in desktop PCs…
