Thursday 23 August 2012

No Perfectionism, Please!!!

Like all variants of software testing, performance testing is ultimately about better product quality. If you've taken on performance testing projects, you know the feeling when you first get your hands on the system. You want it to quickly do whatever it is supposed to do (sometimes - initially at least - with little regard for the complexity hiding behind the different user actions). Ironically, you start to feel better when the system isn't so fast after all - the sinister intent being that you then get to try out a million things to see how they impact performance.

All this lends itself to an attitude that drifts towards perfectionism - and that is a mistake bordering on blasphemy. Anyone worth their salt in software testing will tell you that no amount of testing is ever enough (enough meaning that further testing would not improve quality). This is a noble thought, but we (testers) live in a world of shrinking product cycle times, budget constraints and developers (those evil people who take up 80% of the product resources). Performance testing, lo and behold, is an even more distant afterthought. System testers have to test the system for functionality, bugs need to be fixed and retested (which opens up another Pandora's box of issues), and only then are performance testers called in to do their thing, with the "go-live" date firmly stamped on their screens.

In such circumstances the perfectionist will have the hardest time earning his bread. It is critically important to make the best use of the little time and the few resources available for performance testing. Non-functional requirements need to be clearly defined by the business team and communicated (by God, when has this actually happened in reality?). Performance test scenarios need to be realistic and should be aimed at finding out as much as possible about the system. If initial tests already indicate performance problems, executing the exact same test with more users in order to increase the load on the system makes little sense. As performance test managers, we need to make a judgement call about what can realistically be achieved in the time available - and then adjust scope, scenarios and expectations accordingly.
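To make that idea concrete, here is a minimal sketch (plain Python, standard library only) of the staged approach implied above: step the load up gradually and stop as soon as a response-time target is breached, rather than jumping straight to the full production load. The target URL, the load steps and the 2-second 95th-percentile threshold are all made-up values for illustration, not figures from any real project.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# All of these values are hypothetical - substitute your own system's figures.
TARGET_URL = "http://example.com/"
MAX_ACCEPTABLE_P95_SECONDS = 2.0
LOAD_STEPS = [5, 10, 25, 50]      # concurrent users per step
REQUESTS_PER_USER = 10

def timed_request(_):
    """Issue one GET and return the elapsed time in seconds."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(TARGET_URL, timeout=30).read()
    except Exception:
        pass  # keep the sample even if the request fails; this is only a sketch
    return time.monotonic() - start

def run_step(users):
    """Run one load step and return an approximate 95th percentile response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(timed_request, range(users * REQUESTS_PER_USER)))
    return timings[int(len(timings) * 0.95) - 1]

for users in LOAD_STEPS:
    p95 = run_step(users)
    print(f"{users} users -> p95 {p95:.2f}s")
    if p95 > MAX_ACCEPTABLE_P95_SECONDS:
        # The target is already missed at this load; running the same test
        # with even more users tells us nothing new. Stop here and spend the
        # remaining time on analysis and reporting instead.
        print("Threshold breached - skipping the higher load steps.")
        break
```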

More often than not, you will see that the system does not conform to the agreed set of non-functional requirements. That is fine, and it should be communicated back to business along with the facts about what was discovered about system performance (business, by the way, are not likely to take this very well). As a performance test specialist/manager, you do the project team a greater service this way than by simply executing some standard tests at production loads and showing them the results - those tests have value once lower loads have shown no significant performance problems.

At the very heart of the matter, as testers (of whatever variety), we are there to point out problems in the system. The more problems we find, the more we contribute to the quality of the end product. However, it is not possible to find every problem - an uncomfortable truth, but a truth nonetheless. Moreover, testers are not there to fix problems - the developers do that, and what is or is not important enough to be fixed is entirely out of our hands. What we as performance testers can and must do is relay as much information as possible about system performance and outline the risks (if any) that we feel the system will run into if it goes to production with this level of performance. A perfectionist's attitude makes that pill hard to swallow.