The program level takes this a step higher. The definition varies from one organization to another, but loosely, a program is an umbrella for several projects that are related in some way. Not every organization works this way, so I was fortunate to be part of one such instance. Put simply, my job was to develop a framework for performance testing at the program level. This was a largely hands-off role: I was ultimately responsible for catching performance-related problems at the program level, but for that, performance testing needed to happen in the first place - making it happen is what I was asked to achieve.
As I said, programs usually have multiple projects underneath them, which means there are program-level objectives that the projects are designed to achieve. That is the good part, because everyone tends to share a common goal. The problem is that projects are largely autonomous - this is reflected in their budgets, their scheduling, and the direction they take within the program space. Each project has its own project manager, who drives the project within their own footprint. Any program-level activity, let alone performance testing, means taking these divergent forces and pulling them in a single common direction.
As a program performance test manager, one of the top priorities is to ensure that project stakeholders understand (and commit to) the importance of performance testing and the value it brings - both to their own project and to the program. To do this, a good understanding of all the projects is essential. If they are heterogeneous in technology and/or the business proposition they address, this goal becomes that much harder to attain. It is very common to have detailed discussions with technical architects and business analysts to understand these facets. Once this is done, then comes the task of convincing. This requires strong communication and selling skills - everyone is likely to appreciate the benefits; resistance comes when the price to be paid for those benefits is brought to the table.
Non-functional requirements are hard to define at the project level - formulating them at the program level is harder still. The main reason is the conflict of interest between projects over a particular metric. For example, project A might execute transaction Z in a certain amount of time, but this might be too quick for project B, which needs transactions X and Y to complete before transaction Z runs. Both business analysts and technical architects from all projects need a view of the performance needs of the entire program.
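To make that kind of conflict concrete, here is a minimal sketch of reconciling per-project latency budgets against a program-level target. The transaction names, budgets, and SLA figure are hypothetical illustrations, not numbers from any real program:

```python
# Hypothetical per-project latency budgets, in seconds, for each transaction.
project_budgets = {
    "A": {"Z": 0.5},
    "B": {"X": 1.0, "Y": 1.5},
}

# Program-level ordering: project B needs X and Y to complete
# before project A's transaction Z runs.
execution_order = ["X", "Y", "Z"]

PROGRAM_SLA_SECONDS = 2.5  # hypothetical end-to-end target


def end_to_end_budget(order, budgets):
    """Sum the worst-case budget for each transaction in sequence."""
    total = 0.0
    for txn in order:
        # Take the budget from whichever project(s) own the transaction.
        owners = [b[txn] for b in budgets.values() if txn in b]
        total += max(owners)
    return total


total = end_to_end_budget(execution_order, project_budgets)
print(f"End-to-end budget: {total:.1f}s (program SLA: {PROGRAM_SLA_SECONDS}s)")
if total > PROGRAM_SLA_SECONDS:
    print("Conflict: the per-project budgets cannot meet the program SLA.")
```

Here the individual budgets look reasonable to each project in isolation, yet the sequence adds up to 3.0 seconds against a 2.5-second program target - exactly the kind of conflict that only surfaces when requirements are viewed program-wide.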
Another major challenge is synchronizing schedules. If the projects form a linear chain, the output of one link is the input to the next; in this scenario it is critical that the scheduling puzzle is solved correctly. What makes matters worse is that not all projects (even sibling links in the chain) are of the same size or complexity; each may itself be designed to solve an isolated business problem; and resourcing needs differ greatly across projects. All of these factors merit consideration when analyzing the performance dynamics of projects at the individual and program levels.
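One way to reason about the scheduling puzzle is to treat the "output feeds input" relationships as a directed graph and order the testing so that a downstream project is only exercised once its inputs exist. A minimal sketch, with hypothetical project names and dependencies:

```python
from graphlib import TopologicalSorter

# Each key depends on the projects in its value set; here the chain
# is ProjectA -> ProjectB -> ProjectC (all names hypothetical).
dependencies = {
    "ProjectB": {"ProjectA"},   # B consumes A's output
    "ProjectC": {"ProjectB"},   # C consumes B's output
}

# A valid performance-test order respects every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print("Test scheduling order:", " -> ".join(order))
```

Real programs rarely stay this tidy, but even a rough dependency map like this makes it easier to see which project slippages will cascade through the chain.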
Project collaboration is essential to making the performance framework successful. This means sharing artifacts such as documentation, scripts and/or test data (where relevant), adjusting project schedules to suit other projects, and managing the use of common hardware platforms. The reward of a program performance framework is a complete performance picture of the business program - this offers great value in terms of user experience, customer acquisition and retention, brand value enhancement and so on. While performance metrics obtained at individual project levels serve an important purpose for the business units they belong to, there is much more meaning, and business value, in channeling and combining these metrics into a more congruent and cohesive performance picture.
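As a minimal sketch of what "combining" can mean in practice, per-project response-time samples might be merged into a single program-level distribution. The sample values below are hypothetical; a real framework would pull them from each project's test results:

```python
import statistics

# Hypothetical response-time samples (seconds) from two projects' tests.
project_samples = {
    "ProjectA": [0.42, 0.51, 0.47, 0.60],
    "ProjectB": [1.10, 0.95, 1.30, 1.05],
}

# Merge the raw samples rather than averaging per-project percentiles,
# since percentiles do not combine correctly by simple averaging.
all_samples = [s for samples in project_samples.values() for s in samples]

p90 = statistics.quantiles(all_samples, n=10)[-1]  # ~90th percentile
print(f"Program-level 90th percentile: {p90:.2f}s")
```

The design point is the comment in the middle: a cohesive program-level picture comes from pooling the underlying measurements, not from arithmetic on each project's summary numbers.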