Author: Yogi Schulz

Three whining children, who had forgotten their ski goggles, reminded me how everything must work in harmony to achieve a useful result. Our children were skiing on great skis. They were wearing warm parkas, toques and mittens. We had bought expensive lift tickets. It was all for naught. Because it was snowing, our children were miserable without the goggles they had left behind on our kitchen table. That one missing component overshadowed all the components that were working flawlessly.

We often experience comparable problems in IT. Great workstations aren't much good if the LAN they connect to is a dog. A killer web application is a mega-dud without a functioning call center. State-of-the-art desktop applications fizzle without reasonable tech support. These examples illustrate the unhappy outcome when systems integration fails. Unless IT is all working, it's not working at all.

Successful IT demands that all the components are present and interacting harmoniously. That's much easier to say than it is to achieve. How can we ensure our projects deliver harmonious integration, or at least receive early warning of trouble with a particular component? Here are three techniques.

Risk Assessment

Conducting a risk assessment at least once per phase will provide early warning of project difficulties before they grow into large ticking time bombs.

For example, the discovery of poor application performance in the development environment indicates a risk that the production environment may exhibit similar performance difficulties. That observation should trigger a review of the planned capacity of the production environment as a risk mitigation task. The recommendations of the review will, in turn, reduce the probability of poor initial user experiences once the application goes live.
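To make this concrete, here is a minimal sketch (in Python) of a risk register that scores each risk by probability times impact and ranks the results, so mitigation effort flows to the biggest threats first. The entries, the 1-to-5 scales and the review threshold are illustrative assumptions, not prescriptions from any particular method.

    # Minimal risk-register sketch: score risks by probability x impact
    # and rank them so the biggest threats get mitigated first.
    # Entries, 1-5 scales and the threshold are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        probability: int  # 1 (unlikely) to 5 (almost certain)
        impact: int       # 1 (minor) to 5 (severe)
        mitigation: str

        @property
        def score(self) -> int:
            return self.probability * self.impact

    risks = [
        Risk("Poor performance seen in development recurs in production",
             probability=4, impact=4,
             mitigation="Review planned production capacity; load-test early"),
        Risk("Key integration component is delivered late",
             probability=2, impact=5,
             mitigation="Agree on interface stubs so other work can proceed"),
    ]

    REVIEW_THRESHOLD = 12  # hypothetical cut-off for 'major' risks

    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        flag = "MAJOR" if risk.score >= REVIEW_THRESHOLD else "watch"
        print(f"[{flag}] ({risk.score:2d}) {risk.description} -> {risk.mitigation}")

Repeating the scoring exercise each phase turns the risk assessment from a one-time checklist into the early-warning system described above.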

Surprisingly, many projects are conducted without any risk assessment. Often project teams skip over risk assessment because of the pressure to produce deliverables that form part of the expected system. Some project teams, being painfully aware of their shortcomings, don't want them further illuminated for everyone to see. Such shortsightedness is not the road to success.

When we take mitigating actions to reduce the major risks we identify, we significantly increase the probability that all the system components will function together harmoniously.

Methodology

Just mentioning the term "systems development methodology" generates undeserved yawns or skepticism. The value of a methodology is that it challenges the team to answer lots of questions. A team using a methodology will encounter fewer nasty surprises and execute its project with more discipline.

For example: how many concurrent users must the application support? The answer to this question sets important scale parameters such as the number of servers to be installed, the required bandwidth of the WAN connection and the amount of disk space that will be consumed.
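As a back-of-the-envelope illustration of how one answer drives those scale parameters, the sketch below converts a concurrent-user count into rough sizing figures. Every constant in it (requests per user, per-server throughput, payload size, storage per user) is a hypothetical planning assumption that a real project would replace with measured values.

    # Back-of-the-envelope sizing from a concurrent-user estimate.
    # Every constant is a hypothetical planning assumption, not a
    # benchmark; a real project would substitute measured figures.
    import math

    CONCURRENT_USERS = 2_000           # answer to the methodology question
    REQUESTS_PER_USER_PER_SEC = 0.5    # assumed user think time
    REQUESTS_PER_SERVER_PER_SEC = 150  # assumed per-server throughput
    KILOBITS_PER_REQUEST = 64          # assumed average WAN payload
    GIGABYTES_PER_USER = 0.2           # assumed stored data per user

    peak_requests = CONCURRENT_USERS * REQUESTS_PER_USER_PER_SEC
    servers = math.ceil(peak_requests / REQUESTS_PER_SERVER_PER_SEC)
    wan_mbps = peak_requests * KILOBITS_PER_REQUEST / 1_000
    disk_gb = CONCURRENT_USERS * GIGABYTES_PER_USER

    print(f"Peak load:      {peak_requests:,.0f} requests/sec")
    print(f"Servers needed: {servers} (plus headroom for failover)")
    print(f"WAN bandwidth:  {wan_mbps:,.1f} Mbps at peak")
    print(f"Disk consumed:  {disk_gb:,.0f} GB of user data")

With these assumed figures, 2,000 concurrent users translate into roughly 1,000 requests per second, seven servers, a 64 Mbps WAN link and 400 GB of storage. Change any one assumption and the whole sizing shifts, which is exactly why the question must be asked early.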

Many stakeholders see the use of a methodology as an IT-inspired conspiracy to slow down the project, increase costs by producing useless deliverables and engage in extended technical navel-gazing. This erroneous opinion is based on unfortunate experiences where the value of a methodology was badly explained or the methodology was poorly applied to a project. These experiences, unfortunate though they are, are not reasons to abandon the methodology in favor of a seat-of-the-pants approach.

The combined experience of the team members is almost never enough to ensure that every question the project's success depends on is in fact asked and answered. It's the pesky gaps in the questions that create the embarrassment of the missing system component. Using a systems development methodology will fill in those gaps and lead to harmony.

Benchmarking

Benchmarking a system’s characteristics, planned sizing and project estimates against similar projects being undertaken by others can provide valuable insights and early warning of potential misjudgments, long before roll-out begins, when there’s still plenty of time to take corrective action.

For example, arranging site visits through key suppliers can put a team in touch with other organizations using similar technology and operating at a similar scale. Alternatively, a team can identify potential benchmarking partners through user groups such as SAPPHIRE for SAP software or OAUG for Oracle software.
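Once comparable figures are in hand, acting on them can be as simple as the sketch below, which flags any of our estimates that deviates sharply from the peer median. The metrics, peer numbers and 50 per cent threshold are entirely made up for illustration; real figures would come from the site visits and user groups just mentioned.

    # Flag project estimates that deviate sharply from peer benchmarks.
    # Metrics, peer figures and the 50% threshold are made up for
    # illustration; real data would come from site visits or user groups.
    from statistics import median

    # metric -> (our estimate, figures reported by comparable projects)
    benchmarks = {
        "cost per user ($)":             (1_800, [950, 1_100, 1_250]),
        "rollout duration (weeks)":      (26, [22, 30, 28]),
        "support staff per 1,000 users": (4, [3, 5, 4]),
    }

    DEVIATION_THRESHOLD = 0.5  # flag estimates >50% from the peer median

    for metric, (ours, peers) in benchmarks.items():
        peer_median = median(peers)
        deviation = (ours - peer_median) / peer_median
        status = "INVESTIGATE" if abs(deviation) > DEVIATION_THRESHOLD else "in line"
        print(f"{metric:31} ours={ours:>6,} peers={peer_median:>6,} "
              f"({deviation:+.0%}) -> {status}")

A cost estimate sitting 60 per cent above what peers report doesn't prove a misjudgment, but it is precisely the kind of early warning worth investigating before roll-out.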

Some critics may view site visits as little more than a team junket or a boondoggle. Others are likely to point to material differences among the projects being benchmarked. Such criticism should not dissuade anyone from making the comparison.

None of us is smart enough to think of everything that needs to be considered in a project with many components. Through benchmarking, we can apply the experience of others to enhance the success of our own project.

Conclusions

Very few stakeholders notice when IT components work flawlessly. Everyone expects no less. When a component as small as a pair of ski goggles or a router is missing, under-sized or crashing, everyone will notice.

As project managers, we're responsible for making sure that IT is all working harmoniously. Otherwise, it's not working at all.