Sunday 1 December 2013

Performance Testing Project Handover

First up, my apologies to readers for being in hibernation for the better part of a year. Time has flown by as I moved from one company, in one domain, in one country, to another company, in another domain, in another country. It has been hectic, to say the least.

So let's get down to business. I chose this topic because it is something I am in the middle of; my engagement with my current client ends soon, and naturally I need to pass the baton so that project work is unaffected by the change in personnel - much like a relay race. And as in a relay race, the skills needed to take the project (baton) from someone are different from the skills needed to hand it over to someone else. I was fortunate enough to be at both ends of a handover - an account of both experiences follows.

I became involved with this project about 3 weeks after joining the client's performance testing team. Those first 3 weeks were spent covering for my manager, who was on holiday, on another project that was going live in 2 weeks - more on that some other time; let's fast forward to the project in question. There was NO documentation - nothing. No use cases, no architecture information, no non-functional requirements, nothing. What I did have (and in the circumstances, it was a substantial asset) were 20+ working scripts and, crucially, the scenario designed and ready to run. There were also some benchmark results from previous runs in the not-too-distant past - I was told the client was happy with those numbers, so they became de facto non-functional requirements (at least in terms of response times, throughput, and basic monitoring such as CPU and memory usage).

Having the scripts and scenario in place was important because it gave me time to learn the application without getting in the way of the project schedule. Yes, there were occasions when I got unusual requests to test a subset of the scripts (when I was not sure which scripts fitted into that subset), but I managed. I gradually became comfortable with the application and was able to design other scenarios for different tests. The architecture picture formed in my mind through discussions and observations, and by then the engineering team had put a nice monitoring tool in place, which simplified matters for me no end. I have been on this project for 5 months and am responsible for all its performance-related aspects. However, as with all good things, my involvement will be over in a couple of weeks - I was asked to come up with a handover plan, and based on my earlier experience, I decided to make life a little more pleasant for my successor.

Project documentation is important, but it doesn't have to be too verbose. The important thing is to walk through, in your mind, the things that would be needed and the sequence in which the actions have to be carried out. Any dependencies between actions will immediately fall into place. Once that map is clear in your mind, coming up with the necessary artifacts and the information therein should be straightforward. In this case, the scripts are functional and there is no great need for a major overhaul - "if it ain't broke, don't fix it". What would be useful (a sketch of such a record follows the list) is:
  • a high-level description of each script (just a couple of lines)
  • a list of transactions, in the order they are executed
  • the parameters and correlations used in, and the think times before, each transaction
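To make that concrete, here is a minimal sketch of what a per-script record could look like, written in Python purely for illustration - the names (ScriptDoc, TransactionDoc, think_time_secs and so on) are hypothetical and not tied to any particular tool:

# Illustrative only - a lightweight per-script handover record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransactionDoc:
    name: str                                               # transaction name as it appears in the script
    parameters: List[str] = field(default_factory=list)     # parameterised inputs
    correlations: List[str] = field(default_factory=list)   # values captured from server responses
    think_time_secs: float = 0.0                            # think time applied before this transaction

@dataclass
class ScriptDoc:
    name: str
    description: str                                        # the "couple of lines" summary
    transactions: List[TransactionDoc] = field(default_factory=list)  # in execution order

# Example entry a successor could skim in minutes (values are made up):
login_script = ScriptDoc(
    name="user_login",
    description="Logs a registered user in and lands on the account home page.",
    transactions=[
        TransactionDoc("open_home_page", think_time_secs=5),
        TransactionDoc("submit_credentials",
                       parameters=["username", "password"],
                       correlations=["session_token"],
                       think_time_secs=10),
    ],
)

Whether this lives in a spreadsheet, a wiki page or actual code matters far less than the fact that every script has an entry in a consistent shape.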
The scenario description needs to be more detailed - for each group, provide the following (a sketch follows this list):
  • the number of users
  • ramp up and ramp down rates
  • start time relative to the scenario
  • peak execution period
  • injector used
  • iteration count
  • pacing
  • log details
  • think time replay
  • caching details
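Again purely as an illustration, a scenario-group record could be sketched along these lines - the field names and values are made up, not the actual project's settings:

# Illustrative only - one record per group in the scenario.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScenarioGroupDoc:
    script_name: str
    users: int                    # number of virtual users in the group
    ramp_up: str                  # e.g. "5 users every 30 s"
    ramp_down: str
    start_offset_mins: int        # start time relative to the start of the scenario
    peak_duration_mins: int       # peak execution period
    injector: str                 # load generator host used
    iterations: Optional[int]     # None if the group is duration-driven rather than count-driven
    pacing_secs: Optional[float]  # pacing between iterations, if any
    log_level: str                # e.g. "errors only"
    replay_think_time: bool       # whether recorded think times are replayed
    caching: str                  # e.g. "browser cache simulated, cleared per iteration"

checkout_group = ScenarioGroupDoc(
    script_name="checkout",
    users=50,
    ramp_up="5 users every 30 s",
    ramp_down="10 users every 30 s",
    start_offset_mins=10,
    peak_duration_mins=60,
    injector="injector-02",
    iterations=None,
    pacing_secs=90.0,
    log_level="errors only",
    replay_think_time=True,
    caching="browser cache simulated",
)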
If there are any data-related constraints, these need to be mentioned. Also important to mention are the monitoring details: the metrics being monitored, the servers involved, typical values before, during and after the test, etc. If there are environment-related dependencies, these should be outlined, e.g. servers that are shared with other projects, any times at which the test should not be run, etc. It is also important to account for any pre-test tasks (e.g. data preparation) and post-test tasks (e.g. cleaning up DB tables). If there are any templates used for analysis, these should also be part of the documentation.
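This surrounding information can be captured in the same lightweight way. The entries below are illustrative placeholders, not the project's real servers, values or tasks:

# Illustrative only - run notes a successor can check before and after each test.
run_notes = {
    "monitoring": {
        "metrics": ["CPU %", "memory used", "response time", "throughput"],
        "servers": ["web-01", "app-01", "db-01"],
        "typical_idle_cpu_pct": 10,     # rough value before/after a test
        "typical_peak_cpu_pct": 60,     # rough value during the test
    },
    "environment": {
        "shared_servers": ["db-01 (shared with the reporting project)"],
        "blackout_windows": ["weekdays 09:00-17:00 (shared UAT in use)"],
    },
    "pre_test_tasks": ["load fresh test data", "restart app services", "start monitoring"],
    "post_test_tasks": ["archive results", "truncate temporary DB tables"],
    "analysis_templates": ["results_summary_template.xlsx"],
}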

It is almost never possible to provide a complete picture to the person taking over your project - there is so much information accumulated inside your head over time that is instinctive, that "you just know". That level of comfort comes with experience on the project, but hopefully the things mentioned here, if provided, will make the transition easier. Please feel free to add your thoughts on this.