Sunday, 9 March 2014

Program Performance Test Management - Challenges and Rewards

Performance testing mostly happens at the project level - the stakeholders tend to be from the same group and have a common interest in the performance of that particular product. The resources - both human and financial - also tend to come from or be sanctioned by the same team. There is usually one central authority that drives the effort.

The program level takes this a step higher. The definition varies from one organization to another, but loosely, a program is an umbrella for several projects that are related in some way. This setup does not exist everywhere, so I was fortunate to be part of one such instance. Put simply, my job was to develop a framework for performance testing at the program level. This was a largely hands-off role: yes, I was ultimately responsible for catching performance related problems at the program level, but for that to happen, performance testing needed to happen first across the projects - making it happen was what I was asked to achieve.

As I said, programs usually have multiple projects underneath them. This means there are certain program level objectives that the projects are designed to achieve. That's the good part, because everyone tends to have a common goal. The problem is that projects are largely autonomous - this is reflected in their budgets, their scheduling and the direction they take within the program space. Each project has its own project manager, who drives the project within their own footprint. Any program level activity, performance testing included, means taking these divergent forces and pulling them in a single common direction.

As a program performance test manager, one of your top priorities is to ensure that project stakeholders understand (and commit to) the importance of performance testing and the value it brings - both for their own project and for the program. To do this, a good understanding of all the projects is essential. If they are heterogeneous in terms of technology and/or the business proposition they address, this goal becomes that much harder to attain. It is very common to have detailed discussions with technical architects and business analysts to understand these facets. Once that is done, then comes the task of convincing. This requires great communication and selling skills - everyone is likely to appreciate the benefits; resistance comes when the price to be paid for those benefits is brought to the table.

Non-functional requirements are hard to define at the project level - formulating them at the program level is harder still. The main reason is the conflict of interest between projects over a particular metric. For example, if project A completes transaction Z within a certain time, that might be too quick for project B, which needs transactions X and Y to complete before transaction Z does. Both business analysts and technical architects of all projects need a view of the performance needs of the entire program.
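
To make the conflict concrete, here is a toy illustration (in Python, with entirely hypothetical transaction names and timings) of how individual projects can each meet their own response time targets while a program-level end-to-end budget is still breached - the kind of mismatch that only a program-wide view of the NFRs will expose.

    # A minimal sketch with hypothetical numbers - not taken from any real program.

    # Program-level NFR: an order submitted in project A must be visible in
    # project B's reporting view within 5 seconds end to end.
    PROGRAM_BUDGET_S = 5.0

    # Project-level 95th percentile response times, as each project measured
    # them in isolation (illustrative values).
    project_timings_s = {
        "A: submit order (X)": 1.2,
        "A: confirm payment (Y)": 1.8,
        "B: refresh reporting view (Z)": 2.5,
    }

    end_to_end = sum(project_timings_s.values())
    print(f"End-to-end: {end_to_end:.1f}s against a budget of {PROGRAM_BUDGET_S:.1f}s")

    if end_to_end > PROGRAM_BUDGET_S:
        # Each project may be within its own NFRs, yet the program-level
        # requirement is still breached - a program-wide view is needed to
        # catch this and to negotiate where the time is spent.
        print("Program-level budget breached even though each project passes in isolation.")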

Another major challenge is synchronizing schedules. If the projects form a linear chain, the output of one link is the input to the next; in this scenario it is critical that the scheduling puzzle is solved correctly. What makes matters worse is that not all projects (even sibling links in the chain) are of the same size or complexity; each may itself be designed to solve an isolated business problem; and resourcing needs differ greatly across projects. All these factors merit consideration when analyzing the performance dynamics of projects at both the individual and program levels.

Project collaboration is essential to make the performance framework successful. This means sharing artifacts such as documentation, scripts and/or test data (where relevant), adjusting project schedules to suit other projects, and managing the use of common hardware platforms. The reward of a program performance framework is a complete performance picture of the business program - this offers great value in terms of user experience, customer acquisition and retention, brand value enhancement etc. While performance metrics obtained at the individual project level serve an important purpose for the business units they belong to, there is much more meaning and business value in channeling and combining these metrics into a more congruent and cohesive performance picture.

Sunday, 1 December 2013

Performance Testing Project Handover

First up, my apologies to the readers for being in hibernation for the better part of a year. Time has been flying by me as I moved from one company in one domain in one country, to another in another domain in another country. It has been hectic, to say the least.

So let's get down to business. I chose this topic because it is something I am in the middle of: my engagement with my current client ends soon and naturally I need to pass the baton so that project work is unaffected by the change in personnel - sort of like in a relay race. And as in a relay race, the skills needed for taking the project (baton) from someone are different from the skills needed to hand it over to someone else. I was fortunate enough to be at both ends of the handover - an account of both experiences follows.

I became involved with this project about 3 weeks after joining the client's performance testing team. Those first 3 weeks were spent covering for my manager, who was on holiday, on another project that was going live 2 weeks later - more on that some other time; let's fast forward to the project in question. There was NO documentation - nothing. No use cases, no architecture information, no non-functional requirements, nothing. What I did have (and in the circumstances, it was a substantial asset) were 20+ working scripts and, crucially, the scenario designed and ready to run. There were also some benchmark results from previous runs in the not-too-distant past - I was told the client was happy with those numbers, so they became de-facto non-functional requirements (at least in terms of response times, throughput, and basic monitoring such as CPU and memory usage).

Having the scripts and scenario in place was important because it allowed me time to learn the application without getting in the way of the project schedule. Yes, there were occasions when I got unusual requests to test a subset of the scripts (when I was not sure which scripts fit into that subset), but I managed. I gradually became comfortable with the application and was able to form other scenarios for different tests. The architecture picture formed in my mind through discussions and observations, and by then the engineering team had put a nice monitoring tool in place, which simplified matters for me no end. I have been on this project for 5 months and am responsible for all its performance related aspects. However, as with all good things, my involvement will be over in a couple of weeks - I was asked to come up with a handover plan, and based on my earlier experience, I decided to make life a little more pleasant for my successor.

Project documentation is important, but it doesn't have to be too verbose. The important thing is to walk through, in your mind, the things that would be needed and the sequence in which the actions need to be carried out. Any dependencies between actions will immediately fall into place. Once that map is clear in your mind, coming up with the necessary artifacts and the information therein should be straightforward. In this case, the scripts are functional and there is no great need for a major overhaul - "if it ain't broke, don't fix it". What would be useful is:
  • a high level description of each script (just a couple of lines), 
  • a list of transactions (in the order they are executed)
  • parameters and correlations used in, and the think times before, each transaction. 
The scenario description needs to be more detailed - for each group, provide:
  • the number of users
  • ramp up and ramp down rates
  • start time relative to the scenario
  • peak execution period
  • injector used
  • iteration count
  • pacing
  • log details
  • think time replay
  • caching details
If there are any data related constraints, these need to be mentioned. Also important are the monitoring details: the metrics being monitored, the servers, and typical values before, during and after the test. If there are environment related dependencies, these should be outlined, e.g. servers that are shared with other projects, or any times at which the test should not be run. It is also important to account for any pre-test tasks (e.g. data preparation) and post-test tasks (e.g. cleaning up DB tables). If there are any templates used for analysis, these should also be part of the documentation. A minimal sketch of what such a scenario descriptor might look like follows.
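
To pull the scenario checklist above together, here is a minimal sketch (in Python, with hypothetical field names and values - adapt them to whatever tool and terminology your project uses) of what a machine-readable handover descriptor might look like.

    from dataclasses import dataclass, field
    from typing import List

    # Field names mirror the checklist above; the example values are
    # hypothetical, not from any real project.

    @dataclass
    class UserGroup:
        name: str
        users: int                 # number of virtual users
        ramp_up_per_min: int       # users added per minute
        ramp_down_per_min: int     # users removed per minute
        start_offset_min: int      # start time relative to scenario start
        peak_period_min: int       # duration of peak execution
        injector: str              # load generator host
        iterations: int            # 0 = run for the scenario duration
        pacing_s: float
        log_level: str
        replay_think_times: bool
        caching: str               # e.g. "new user on each iteration"

    @dataclass
    class Scenario:
        name: str
        groups: List[UserGroup] = field(default_factory=list)
        monitored_servers: List[str] = field(default_factory=list)
        pre_test_tasks: List[str] = field(default_factory=list)
        post_test_tasks: List[str] = field(default_factory=list)

    # Hypothetical example of one documented scenario
    peak_hour = Scenario(
        name="Peak hour load",
        groups=[UserGroup("Search", 100, 10, 10, 0, 60, "injector01", 0,
                          5.0, "errors only", True, "new user on each iteration")],
        monitored_servers=["web01", "app01", "db01"],
        pre_test_tasks=["Load test accounts", "Warm application caches"],
        post_test_tasks=["Clean up DB tables created during the test"],
    )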

It is almost never possible to provide a complete picture to another person taking over your project - there is so much information inside your head accumulated over time that is instinctive and "you just know it". This level of comfort comes with experience on the project but hopefully the things mentioned here, if provided, will make the transition easier. Please feel free to add your thoughts about this.

Tuesday, 8 January 2013

Who Should Define Non-Functional Requirements


Non-functional requirements (NFRs) are the heartbeat of any performance testing exercise. Loosely stated, these are the desired performance attributes and (more importantly) the values those attributes need to have for performance to be acceptable. Actual application performance is then validated against the NFRs. In a strict sense, performance itself is a non-functional attribute - but for the sake of this discussion, we'll assume that performance subsumes the other relevant non-functional attributes.

When performance testers have initial meetings with other team members to kick-start the performance testing exercise, the demands for performance are often stated in terms such as "good", "fast", "efficient" etc. - unfortunately, this is a very common observation. Performance demands expressed this way are well-intentioned but extremely subjective and open to interpretation. They must be stated as quantifiable values that can be measured.
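
As a simple illustration, here is a minimal sketch (in Python, with purely illustrative numbers) of what it looks like to restate "good", "fast" and "efficient" as measurable targets and check a set of test results against them.

    # Quantified NFRs - the numbers below are illustrative only.
    nfrs = {
        "p95_response_time_s": 2.0,   # "fast": 95% of page responses within 2s
        "throughput_tps": 50,          # "efficient": sustain 50 transactions/s
        "max_cpu_pct": 70,             # "good": app server CPU stays below 70%
        "error_rate_pct": 0.1,
    }

    # Hypothetical measurements taken from a load test report.
    measured = {
        "p95_response_time_s": 2.4,
        "throughput_tps": 55,
        "max_cpu_pct": 65,
        "error_rate_pct": 0.05,
    }

    def check(metric, measured_value, target):
        # Throughput must meet or exceed its target; everything else must
        # stay at or below its target.
        ok = measured_value >= target if metric == "throughput_tps" else measured_value <= target
        status = "PASS" if ok else "FAIL"
        print(f"{metric}: measured {measured_value} vs target {target} -> {status}")

    for metric, target in nfrs.items():
        check(metric, measured[metric], target)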

A likely reason for the subjective description of NFRs is the team's lack of performance engineering know-how (particularly among the non-technical members). What often happens is an inclination on everyone else's part to entrust the performance tester with defining what is "good", "fast" and "efficient" (along with other desirable attributes) and with assigning values to these attributes. This is understandable, since the performance tester has probably had experience with other (similar) projects from which such values can be taken as a general rule of thumb. However, all projects are different, and conclusions or observations from one cannot simply be packaged and applied to the others.

It is important that the attributes (and their values) that pertain to user interaction (e.g. web page response times in a web based application) come from the people who will be using the system and who understand the business well. These need to be defined early on - perhaps at the conceptualization stage, prior to the design phase. Other attributes such as efficiency (e.g. hardware resource usage - CPU, memory, disk, network etc.) need to be addressed in the design phase - meaning having the necessary hardware in place to meet the earlier requirements, while at the same time considering the costs involved as well as other design considerations.

If only minimal hardware resources are available, then clear limits on their acceptable usage need to be defined. The best person to define these is the software or enterprise architect. It is he/she who will also define the maximum number of user sessions that can be supported concurrently, as well as what happens under conditions of stress (e.g. when one of the application servers goes down and all traffic needs to be handled by the other) or when there are sudden peaks in traffic at certain times. An application may have other critical needs such as high availability and scalability, the ability to quickly recover from failure, robustness etc. - again, it is the software or enterprise architect who must ensure that these are properly addressed.

If the performance tester is brought in from another team (say, a team of specialized performance testers) and the scope of his/her work is restricted to the current release, then it is critical that they do not own the NFR process. It is ultimately the project team that has to live with the software and, no matter how hard or painful it may be, they should be the ones to agree on the non-functional attributes and their values. The performance tester can certainly advise on which performance attributes should be considered for measurement (and perhaps suggest possible values), but the final decision must rest with the project. Even here the tester's hands are slightly tied - the hardware resource related decisions may already have been made and implemented (if performance testing wasn't part of the initial discussions, a very likely scenario), so the performance tester can only advise based on ground realities rather than what the business would ideally like.

Thursday, 6 December 2012

Managing Performance Troubleshooting Projects



Software performance troubleshooting can be defined and interpreted in several ways - for the sake of this discussion, it refers to dealing with performance related problems after the system is in production. A related term is performance tuning, but I consider that to be improvement over existing performance metrics. Performance troubleshooting projects tend to be guerrilla assignments where one has to act quickly and with precision. My experience is that managing such projects requires a slightly different mindset compared to a normal performance testing project.

First things first: why do systems have performance related issues once they are in production? There can be many causes: unexpectedly high surges of traffic, not enough thought given to non-functional requirements at the system architecture level, poorly specified non-functional requirements, incorrect assumptions about how the test environment scales to production, dependency on an external system, and so on. A logical question to ask is: weren't all these things accounted for during the normal performance testing phase before going to production? That's a fair point, but it needs to be understood that performance testing itself is performed under extremely tight timelines. Much like functional testing, it is impossible to test each and every conceivable scenario in performance testing either. Furthermore, no matter how much one desires it, the testing and production environments are very seldom identical - this alone can be a source of many issues. The fact is that production systems, with their live data and real-time traffic, are so complex that simulating them with 100% accuracy in the test environment is almost impossible - hence the difficulty in predicting production issues.

In these cases, prevention is obviously better than cure - but as I have tried to argue above, some things cannot be prevented. When performance testing for such projects has been completed (to whatever extent possible), it is critically important that potential production issues are highlighted as risks in the final document. This will go a long way towards understanding the situation when it's time to perform troubleshooting tests. The documented risks should be the starting point of the troubleshooting exercise. The tests to run should be few and need to be planned very carefully so that the problem is identified as quickly as possible. The tests should be run in the production environment and with live data. This in itself is a very tricky proposition - what should the load on the environment be? If it's too much, the system slows down and affects live users in adverse ways, potentially losing business; if it's too little, reproducing the actual problem might be difficult. Can certain components be switched off so as to isolate the problem - and if so, for how long? If additional data flows are needed, where do they come from? Then there are legal/contractual issues to consider, such as possible testing of any external systems (if allowed, then who does it?). All of these are judgement calls and need to be made accurately and quickly (often in real time), by the performance test specialist in charge as well as the project team.
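
As one possible way of approaching the load question cautiously, here is a minimal sketch (in Python, using the requests library; the URL, step sizes and thresholds are hypothetical and would have to be agreed with the project team beforehand) of a stepped probe that backs off the moment response times degrade.

    import time
    import requests  # any HTTP client would do

    TARGET_URL = "https://example.com/health"   # hypothetical endpoint
    MAX_ACCEPTABLE_S = 2.0                      # abort threshold for response time
    STEPS = [1, 2, 5, 10]                       # requests per step, deliberately small

    def probe(requests_in_step: int) -> float:
        """Send a small sequential burst and return the worst response time."""
        worst = 0.0
        for _ in range(requests_in_step):
            start = time.monotonic()
            requests.get(TARGET_URL, timeout=10)
            worst = max(worst, time.monotonic() - start)
        return worst

    for step in STEPS:
        worst = probe(step)
        print(f"{step} requests: worst response {worst:.2f}s")
        if worst > MAX_ACCEPTABLE_S:
            # Stop immediately rather than pushing a live system further.
            print("Degradation observed - stopping the probe here.")
            break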

Performance troubleshooting is performed by experienced specialists who have done it before. The time frame is obviously very short - the business loses money every minute the problem persists. To avoid clutter, there is usually just one person who handles the hands-on work as well as the performance test management duties. The project team treats these projects with a great deal of urgency - tasks to support the performance test specialist are expedited, and there is a general willingness on their part to let the specialist take the lead and guide the effort forward. This places added responsibility on the specialist to try and achieve the desired goal within the given set of constraints.

Performance troubleshooting is an expensive exercise since it happens very late in the product cycle. Unfortunately, systems that have had production issues once are at risk of being afflicted with them again - particularly if the remedies to the first issue involve changes to shared code repositories. It is therefore very important that performance test managers, when they do have the opportunity, insist that pre-production performance testing be as thorough as possible. If the exhibited performance is not satisfactory, or if enough testing has not been performed (for whatever reason), the clear recommendation should be not to risk going live - or at the very least, the risks of doing so need to be communicated in writing and must be fully understood.

Thursday, 20 September 2012

Which Comes First - Performance Testing Or Test Automation?


At the very outset of this post, let me say that I am not a test automation specialist at all. I have done test automation for a couple of products, but performance testing is much more my forte. I have, however, been on projects where both of these exercises were performed on a particular release. My aim here is to put a road map around how to go about doing both.

First up, some views about test automation. My opinion is that it is not testing at all - it's development. The tests that are automated are the functional tests - test automation just does the same things through an automated process. To that end, test automation does not tell us anything about the system that functional testing does not. Sure, it tells us the same things a lot quicker and with greater efficiency, and that saves a lot of precious time for other activities - but nothing more. Unlike performance testing, which is highly recommended and could cost management a lot if not done, test automation is not critical to the health of the system. In my experience, it has been somewhat rare for projects to do both performance testing and test automation - usually it's one or the other (performance testing tends to win out), but the discussion here assumes both are performed.

In the first release there will not be a regression library of automated tests to run, so test automation needs to proceed hand in hand with functional testing. Test cases that are candidates for automation need to be identified very early on. As functional testing progresses, regression testing of already-tested functionality should be done through the test automation scripts. The way this helps performance testing is that the system is likely to be in a stable state for scripting, given that functional testing has reduced functional errors to a certain degree. Furthermore, the scripts for some automation test cases can be exported to the performance testing tool for performance testing (this is possible at least in HP's suite) - not every automation test case will be tested for performance, so one has to choose. By the time the second release comes around, there is a library of automated regression test cases that should be run, and expanded to cover the new release's functionality, as often as needed. Performance testing stands to benefit in the same way as before.
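
To illustrate the reuse idea (not the actual export mechanism in HP's suite), here is a minimal sketch in Python of a single scripted user journey that serves both as a single-user functional check and as a crude multi-iteration timing run; the URL and assertions are hypothetical.

    import time
    import requests  # assumed HTTP client; commercial tools have their own recorders

    BASE_URL = "https://example.com"  # hypothetical application under test

    def search_journey(session: requests.Session) -> None:
        """One automated test case: load the home page, then run a search."""
        home = session.get(f"{BASE_URL}/")
        assert home.status_code == 200
        results = session.get(f"{BASE_URL}/search", params={"q": "widgets"})
        assert results.status_code == 200

    def functional_check() -> None:
        # Test automation view: run the journey once and rely on the assertions.
        search_journey(requests.Session())

    def timing_run(iterations: int = 20) -> None:
        # Performance view: repeat the same journey and record elapsed times.
        timings = []
        session = requests.Session()
        for _ in range(iterations):
            start = time.monotonic()
            search_journey(session)
            timings.append(time.monotonic() - start)
        print(f"avg {sum(timings)/len(timings):.2f}s, worst {max(timings):.2f}s")

    if __name__ == "__main__":
        functional_check()
        timing_run()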

One of the pitfalls to avoid when following this road map is the temptation to reduce the scope of performance testing (or do away with it entirely) because test automation execution reports tell us that all is well and dandy with the system. This can happen when pressure to release the product on schedule increases. It must be remembered that no matter how well functional testing went (in terms of removing defects) and how quickly regression testing was completed thanks to test automation, performance testing is about testing the non-functional attributes of the system. Its aims are entirely different. Functional testing and test automation results provide information about what works in the system for a single user - they say nothing about how well it would work under load and/or when system resources are stressed.

A lot of other issues, such as the choice of test cases to automate and which of those to use for performance testing, are out of scope for this discussion and will be left for another day. In general, test automation and performance testing offer value in different ways - it is up to the test manager and the performance test manager to figure out how to get the most out of each, as well as how one benefits from the other.


Thursday, 23 August 2012

No Perfectionism, Please!!!

Like all variants of software testing, performance testing too is ultimately about better product quality. If you've taken on performance testing projects, you know the feeling when you first get your hands on the system. You want it to quickly do whatever it is supposed to do (sometimes - initially at least - with little regard for the complexities behind the scenes of different user actions). Ironically, you start to feel better when the system isn't so fast after all - the sinister intent being that you then get to try out a million things to see how they impact performance.

All this lends itself to an attitude heading towards perfectionism - and that is a mistake bordering on blasphemy in magnitude. Anyone worth their salt in software testing will tell you that no amount of testing is ever enough (enough meaning that any further testing will not improve quality). This is a noble thought to have, but we (testers) live in a world of shrinking product cycle times, budget constraints and developers (those evil people who take up 80% of the product resources). Performance testing, lo and behold, is an even more distant afterthought. System testers have to test the system for functionality, and bugs need to be fixed and retested (which opens up another Pandora's box of issues). Finally, performance testers are called in to do their thing with the "go-live" date firmly stamped on their screens.

In such circumstances the perfectionist will have the hardest time earning his bread. It is critically important to make the best use of the little time and resources available to performance testing. Non-functional requirements need to be clearly defined by the business team and communicated (by God, when has this actually happened in reality?). Performance test scenarios need to be realistic and should be aimed at finding out as much as possible about the system. If initial tests indicate performance problems, then executing the exact same test with more users, with a view to increasing the load on the system, makes little sense. As performance test managers, we need to make a judgement call about what can realistically be achieved in the time available - scope, scenarios and expectations then need to be adjusted accordingly.

More often than not, you will see that the system does not conform to the agreed set of non-functional requirements. This is fine and should be communicated back to business, along with the facts about what was discovered about system performance (business, by the way, are not likely to take this very well). As a performance test specialist/manager, doing this serves the project team better than simply executing some standard tests at production loads and showing them the results - those tests have value once lower loads have shown no significant problems with performance.

At the very heart of the matter, as testers (of whatever variety), we are there to point out problems in the system. The more problems we can find, the more we contribute towards the quality of the end product. However, it is not possible to find every problem - this is an uncomfortable truth, but a truth nonetheless. Moreover, testers are not there to fix problems - the developers do that, and what is or is not important enough to be fixed is entirely out of our hands. What we as performance testers can and must do is relay as much information as possible about system performance and outline the risks (if any) that we feel the system will run into if it goes to production with this level of performance. Having a perfectionist's attitude makes that pill hard to swallow.