Evaluation frameworks typically set out the focus and scope of the evaluation (the ‘what’), including the approach and methods, indicators and rubrics (the ‘how’), and a high-level timeline for the deliverables (the ‘when’). Frameworks also often suggest ways of working, and one often-heard suggestion is that the evaluation team will or should work “collaboratively” with key stakeholders. But what does “collaboration” mean in practice, and how will it affect the evaluation?
Experience has shown that individuals interpret collaboration differently. Nevertheless, researchers have identified a number of key factors that influence successful collaboration, such as leadership and communication (Hogue, Parkins, Clark, Bergstrum & Slinski, 1995; Keith, Perkins, Zhou, Clifford, Gilmore & Townsend, 1993), and participation and membership (Keith et al., 1993; Borden, 1997). Instruments are also available to help guide successful collaboration (Borden & Perkins, 1999), but how such factors are embedded into the evaluation process is not often made explicit.
Evaluation plans sometimes omit details about collaboration, such as the people who will take part in the different tasks (the ‘who’ and the ‘what’) and the capabilities they bring (the ‘why’). Making these elements of collaboration explicit, however, provides an opportunity to support your evaluation standards.
Standard of Proof recently tested the feasibility of an “impossible” group randomised trial. While testing the overall system for feasibility, our work focussed on developing the structures and processes to ensure the highest standard of evidence to inform decision-making. The approach required collaboration, and as one element of our efforts, we built an adaptive management process around the Joint Committee Standards, ensuring people (the ‘who’) were engaged in relevant activities (the ‘what’) to support the utility, accuracy, feasibility, propriety and accountability of the evaluation (the ‘why’). Pulling together the expertise and the explicit approach to collaboration made the “impossible” randomised trial possible.
Beyond the ‘who’, ‘what’ and ‘why’, it is important to define the time required to realise the collaborative effort (the ‘how much’). In large-scale or challenging evaluations, such “hidden” costs can break an evaluation, often resulting in low participation, weak engagement and poor-quality data. By making the ‘how much’ elements of collaboration explicit, you increase your opportunity to successfully deliver a challenging large-scale evaluation with quality evidence.