Sunday 1 December 2013

Performance Testing Project Handover

First up, my apologies to the readers for being in hibernation for the better part of a year. Time has flown by as I moved from one company in one domain in one country, to another company in another domain in another country. It has been hectic, to say the least.

So let's get down to business. I chose this topic because it is something I am in the middle of; my engagement with my current client ends soon and naturally I need to pass the baton so that project work is unaffected by the change in personnel - much like in a relay race. As in a relay race, the skills needed for taking over the project (baton) from someone are different from the skills needed to hand it over to someone else. I was fortunate enough to be at both ends of the handover - an account of both experiences follows.

I became involved with this project about 3 weeks after joining the client's performance testing team. The first 3 weeks were spent covering for my manager, who was on holiday, on another project that was going live after 2 weeks - more on this some other time; let's fast-forward to the project in question. There was NO documentation - nothing. No use cases, no architecture information, no non-functional requirements, nothing. What I did have (and in the circumstances, it was a substantial asset) were 20+ working scripts and, crucially, the scenario designed and ready to run. There were also some benchmark results from previous runs in the not-too-distant past - I was told the client was happy with these numbers, so they became de facto non-functional requirements (at least in terms of response times, throughput, and basic monitoring metrics such as CPU and memory usage).

Having the scripts and scenario in place was important because it gave me time to learn the application without getting in the way of the project schedule. Yes, there were occasions when I got unusual requests to test a subset of the scripts (when I was not sure which scripts fit into that subset), but I managed. I gradually became comfortable with the application and was able to form other scenarios for different tests. A picture of the architecture formed in my mind through discussions and observations, and by then the engineering team had put a nice monitoring tool in place, which simplified matters for me no end. I have been on this project for 5 months and am responsible for all its performance-related aspects. However, as with all good things, my involvement will be over in a couple of weeks - I was asked to come up with a handover plan and, based on my earlier experience, I decided to make life a little more pleasant for my successor.

Project documentation is important, but it doesn't have to be too verbose. The important thing is to walk through, in your mind, the things that would be needed and the sequence in which the actions have to be carried out; any dependencies between actions will immediately fall into place. Once that map is clear in your mind, coming up with the necessary artifacts and the information therein should be straightforward. In this case, the scripts are functional and there is no great need for a major overhaul - "if it ain't broke, don't fix it". What would be useful is:
  • a high-level description of each script (just a couple of lines)
  • a list of transactions (in the order they are executed)
  • the parameters and correlations used in, and the think times before, each transaction.
The scenario description needs to be more detailed - for each group, provide:
  • the number of users
  • ramp-up and ramp-down rates
  • start time relative to the scenario
  • peak execution period
  • injector used
  • iteration count
  • pacing
  • log details
  • think time replay
  • caching details
If there are any data-related constraints, these need to be mentioned. Also important to mention are the monitoring details: the metrics being monitored, the servers involved, and typical values before, during and after the test. If there are environment-related dependencies, these should be outlined, e.g. servers that are shared with other projects, or times at which the test should not be run. It is also important to account for any pre-test tasks (e.g. data preparation) and post-test tasks (e.g. cleaning up DB tables). If any templates are used for analysis, these should also be part of the documentation.
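As an illustration of how compact this can be, here is a minimal sketch (in Python, with every name and value invented for the example - none of it comes from the actual project) of the kind of structured summary that could sit alongside the prose:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical handover structures - names and values are illustrative only.

    @dataclass
    class ScriptSummary:
        name: str
        description: str                       # a couple of lines at most
        transactions: List[str]                # in execution order
        parameters: List[str] = field(default_factory=list)
        correlations: List[str] = field(default_factory=list)
        think_time_secs: List[int] = field(default_factory=list)  # before each transaction

    @dataclass
    class ScenarioGroup:
        script: str
        users: int
        ramp_up_per_min: int
        ramp_down_per_min: int
        start_offset_mins: int                 # relative to scenario start
        peak_duration_mins: int
        injector: str
        iterations: int                        # 0 = run until the scenario ends
        pacing_secs: int
        log_level: str
        replay_think_times: bool
        simulate_browser_cache: bool

    login_search = ScriptSummary(
        name="login_and_search",
        description="Logs in, runs a product search, logs out.",
        transactions=["T01_Login", "T02_Search", "T03_Logout"],
        parameters=["username", "search_term"],
        correlations=["session_id"],
        think_time_secs=[0, 5, 3],
    )

    group_1 = ScenarioGroup(
        script="login_and_search", users=50, ramp_up_per_min=5,
        ramp_down_per_min=10, start_offset_mins=0, peak_duration_mins=60,
        injector="injector01", iterations=0, pacing_secs=90,
        log_level="errors only", replay_think_times=True,
        simulate_browser_cache=True,
    )

The exact format matters far less than the fact that a newcomer can see, at a glance, what each script does and how the scenario is put together.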

It is almost never possible to provide a complete picture to the person taking over your project - there is so much information accumulated inside your head over time that is instinctive and that "you just know". This level of comfort comes with experience on the project, but hopefully the things mentioned here, if provided, will make the transition easier. Please feel free to add your thoughts about this.

Tuesday 8 January 2013

Who Should Define Non-Functional Requirements

Non-functional requirements (NFRs) are the heartbeat of any performance testing exercise. Loosely stated, these are the desired performance attributes and (more importantly) the values those attributes need to have for performance to be considered acceptable. Actual application performance is then validated against the NFRs. In a strict sense, performance itself is a non-functional attribute - but for the sake of this discussion, we'll assume that performance subsumes the other relevant non-functional attributes.

When performance testers have initial meetings with other team members to kick-start the performance testing exercise, the demands for performance are often stated in terms such as "good", "fast" or "efficient" - this, unfortunately, is a very common observation. Performance demands expressed this way are well intentioned but extremely subjective and open to interpretation. They must instead be stated as quantifiable values that can be measured.
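For example (and the figures here are purely illustrative, not targets from any real project), "search should be fast" might be restated as a set of targets that a test can actually pass or fail:

    # Illustrative, made-up targets - the point is that each one can be
    # measured and checked, unlike "fast" or "good".
    nfr_targets = {
        "search_90th_percentile_response_secs":   ("<=", 2.0),
        "checkout_90th_percentile_response_secs": ("<=", 4.0),
        "peak_throughput_transactions_per_sec":   (">=", 25.0),
        "error_rate_percent":                     ("<=", 0.1),
    }

    def meets_targets(measured: dict) -> bool:
        """Check measured results against every target above."""
        ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
        return all(ops[op](measured[name], limit)
                   for name, (op, limit) in nfr_targets.items())

    print(meets_targets({
        "search_90th_percentile_response_secs": 1.7,
        "checkout_90th_percentile_response_secs": 3.9,
        "peak_throughput_transactions_per_sec": 28.0,
        "error_rate_percent": 0.05,
    }))  # True - every target is met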

A likely reason for the subjective description of NFRs is the team's lack of performance engineering know-how (particularly among the non-technical members). What often happens is an inclination on everyone else's part to entrust the performance tester with defining what "good", "fast" and "efficient" mean (along with other desirable attributes) and with putting values against those attributes. This is understandable, since the performance tester has probably had experience with other (similar) projects from which these values can be taken as a general rule of thumb. However, all projects are different, and conclusions or observations from one cannot simply be packaged and applied to another.

It is important that the attributes (and their values) that pertain to user interaction (e.g. web page response times in a web-based application) come from the people who will be using the system and who understand the business well. These need to be defined early on - perhaps at the conceptualization stage, prior to the design phase. Other attributes such as efficiency (e.g. hardware resource usage - CPU, memory, disk, network etc.) need to be addressed in the design phase - meaning having the necessary hardware in place to meet the earlier requirements, while weighing the costs involved and other design constraints. If only minimal hardware resources are available, then clear limits on their acceptable usage need to be defined. The best person to define these is the software or enterprise architect. He or she will also define the maximum number of user sessions that can be supported concurrently, as well as what happens under conditions of stress (e.g. when one of the application servers goes down and all traffic needs to be handled by the other) or when there are sudden peaks in traffic at certain times. An application may have other critical needs such as high availability, scalability, the ability to recover quickly from failure, robustness etc. - again, it is the software or enterprise architect who must ensure that these are properly addressed.
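To make the "one application server goes down" case concrete, here is the kind of back-of-the-envelope headroom check the architect might do - the numbers are assumed purely for illustration:

    # Assumed numbers, for illustration only.
    servers = 2
    cpu_per_server_normal = 40.0   # % CPU on each server at normal peak load
    cpu_ceiling = 85.0             # agreed maximum acceptable CPU under stress

    # If one server fails, the surviving server takes all of the load,
    # so (roughly) the CPU of both servers lands on one box.
    cpu_after_failover = cpu_per_server_normal * servers

    print(f"Estimated CPU on the surviving server: {cpu_after_failover:.0f}%")
    print("Within the agreed ceiling" if cpu_after_failover <= cpu_ceiling
          else "Exceeds the agreed ceiling - more headroom or capacity needed")

With these assumed figures the surviving server lands at roughly 80% CPU, just inside the 85% ceiling - which is exactly the sort of quantified statement the architect, not the performance tester, should be signing off on.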

If the performance tester is brought in from another team (say, a team of specialized performance testers) and the scope of the work is restricted to the current release, then it is critical that they do not own the NFR process. It is ultimately the project team that has to live with the software, and no matter how hard or painful it may be, they should be the ones to agree on the non-functional attributes and their values. The performance tester can certainly advise on which performance attributes should be considered for measurement (and perhaps on possible values for them too), but the final decision must rest with the project team. Even here the tester's hands are slightly tied - the hardware-related decisions may have already been made and implemented (in case performance testing wasn't part of the initial discussions, a very likely scenario), so the performance tester can only advise based on ground realities rather than what the business would ideally like.