Maintainability (Part I)

Nov 11, 2009 13:19

I tend to end up in roles where new work is being performed. Maybe a new department is being created, or an existing department is undergoing an overhaul, or a new IT system has been introduced. Generally, and particularly in the latter case, this involves the creation of ad-hoc databases or reports.

This sort of thing crops up with depressing regularity: often, during the scoping phase for the system or process, no-one thinks about getting sensible data out of the system for reporting purposes. The question of data analysis therefore only surfaces a couple of days after "go-live", when, inevitably, a contractor (or an internal non-specialist) is dragged in to pull information from the back-end database. And it's needed *now*, which means that it matters little how the demand is met.

This brings me to the topic of maintainability. In situations where a manager has less technical knowledge than the guy in charge of the new reporting "solution", and industry best practice isn't being followed, we see these sorts of things:

i) A mass of impenetrable Excel reports, with indecipherable VBA spaghetti code providing half the functionality.

ii) Reports saved in 23 different locations across local hard drives and network shares, with no central record of where they are or what they do.

iii) Reports built on SQL scripts that point at a direct copy of the system back-end (see the sketch after this list). Of course, once version 2.0 of the software comes along, the back-end database schema changes and tens of thousands of man-hours' worth of work becomes either useless or in need of serious overhaul.

iv) Hugely time-consuming manual data-maintenance procedures which only one person understands, or, worse, local bespoke scripts designed to do the job.

and consequently...

v) Single points of failure who cost a great deal of money to maintain (and even more to replace).
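To make (iii) concrete, here is a minimal sketch of the sort of query I mean. The table and column names (crm_case, status_cd and so on) are invented purely for illustration; the point is that the report is welded directly to the vendor's back-end schema.

    -- Hypothetical report query written straight against a copy of the
    -- vendor's back-end tables.
    SELECT c.case_ref,
           c.owner_id,
           c.closed_dt
    FROM   crm_case c               -- the vendor's internal table
    WHERE  c.closed_dt >= '2009-10-01'
      AND  c.status_cd = 7;         -- magic number from the vendor schema
    -- When version 2.0 renames crm_case, or moves status_cd out into a
    -- lookup table, every report built like this breaks at once.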

For now I'll ignore the classic system-killing direct-backend SQL query, as it's outside the scope of this post. The upshot of all this is that when a new system is being introduced, it's essential to understand how management information will be obtained, and to factor the associated development work into the project plan.

In anything resembling a serious project, some form of proper data warehouse ought to be designed and introduced, with ETL routines that can be rebuilt quickly when the source database(s) change, so that reporting functionality isn't affected. Obviously the whole thing should be meticulously documented while it is still at the design stage.
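By way of illustration only (the names wh_closed_cases, crm_case and crm_user are again invented), the decoupling can be as simple as an ETL routine that copies the vendor's data into a stable, report-friendly structure. That routine is then the only piece of code that knows about the vendor's schema, so when version 2.0 arrives it is the only thing that needs rewriting; the reports themselves are untouched.

    -- Warehouse table with a stable, documented shape.
    CREATE TABLE wh_closed_cases (
        case_ref    VARCHAR(20)  NOT NULL,
        owner_name  VARCHAR(100) NOT NULL,
        closed_date DATE         NOT NULL
    );

    -- The ETL step: the one place that maps the vendor's schema onto ours.
    -- Reports query wh_closed_cases and never see the source tables.
    INSERT INTO wh_closed_cases (case_ref, owner_name, closed_date)
    SELECT c.case_ref,
           u.full_name,
           c.closed_dt
    FROM   crm_case c
    JOIN   crm_user u ON u.user_id = c.owner_id
    WHERE  c.closed_dt >= '2009-10-01';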

A decent reporting tool (Business Objects, for instance) can then be introduced to give managers on-demand access to standardised reports.

All of this sounds utterly obvious, but it's amazing how unusual it can be. In my second post on this subject, I'll take a look at what other people have written around the subject, and at how it fits into ITIL.

management, itil, work, databases, reliability, maintainability, availability
