Good measurement ensures an efficient approach and accurate data

Client: Ministry of Education


Standard of Proof recently provided support for a national randomised trial in New Zealand. Interestingly, in this case the evaluation randomly allocated “clusters” of people (rather than individuals) into control and treatment groups. Why is this important? Because your sample size needs to be significantly larger in such evaluation designs.

Let’s take an example. A class includes a group of students (ie, a “cluster”). If your evaluation and activity are being delivered across multiple schools, how can you randomise students easily and efficiently? One approach, more common in clinical trials, is to embed a clustered design element in your evaluation; in this example, we would randomise by class – our “cluster” – rather than by student. Although this is a simpler allocation approach, it may not be as efficient for data collection and analysis.
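To make the allocation step concrete, here is a minimal sketch of class-level randomisation. The class identifiers, the fixed seed and the even treatment/control split are illustrative assumptions, not details of the trial itself.

```python
# Minimal sketch: randomise whole classes (clusters), not individual students.
import random

def randomise_clusters(cluster_ids, seed=2024):
    """Allocate whole clusters (eg, classes) to treatment or control."""
    rng = random.Random(seed)          # fixed seed so the allocation is reproducible
    shuffled = list(cluster_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2          # assumes an even 50/50 split across arms
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Illustrative class identifiers only - not real trial data
classes = ["school1_classA", "school1_classB", "school2_classA", "school2_classB"]
print(randomise_clusters(classes))
# Every student in a "treatment" class receives the activity; allocation
# happens once per class, never per student.
```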

Clustered data have a pre-existing group structure. Using this class example again, we would reasonably assume that children’s achievement scores are in some way influenced by their cluster (ie, the teacher, the class itself, the characteristics of the specific school they are enrolled in, etc). This “relatedness” within the cluster has a significant effect on your sample size requirements. If measuring progress at the student level, what may have been a study requiring 200 participants may now require 2000.
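This “relatedness” is usually summarised by the intra-cluster correlation (ICC), and the resulting sample size inflation by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size. The back-of-the-envelope calculation below uses an assumed class size and ICC chosen purely to show how a 200-participant study can balloon towards 2000; they are not figures from this evaluation.

```python
# Back-of-the-envelope design effect for a cluster randomised trial.
# DEFF = 1 + (m - 1) * icc, where m is the average cluster size and
# icc is the intra-cluster correlation within a class.

def clustered_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size for cluster allocation."""
    deff = 1 + (cluster_size - 1) * icc
    return n_individual * deff

n_simple = 200   # participants needed if students were randomised individually
m = 25           # assumed average class size (illustrative)
icc = 0.36       # assumed intra-cluster correlation (illustrative)

print(clustered_sample_size(n_simple, m, icc))  # ~1928 students, roughly a tenfold increase
```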

Such considerations are useful to know up-front. Given the right balance, you can certainly design an efficient and practical randomised trial.

Implementation plans support collaboration

Client: Ministry of Education


Evaluation frameworks typically set out the focus and scope of the evaluation (the ‘what’), including the approach and methods, indicators and rubrics (the ‘how’), and a high-level timeline for the deliverables (the ‘when’). Frameworks also often suggest ways of working, and one often-heard suggestion is that the evaluation team will or should work “collaboratively” with key stakeholders.  But what does “collaboration” mean in practice, and how will it affect the evaluation?

Experience has shown that individuals interpret collaboration differently. Nevertheless, researchers have identified a number of key factors that influence successful collaboration, such as leadership and communication (Hogue, Parkins, Clark, Bergstrum & Slinski, 1995; Keith, Perkins, Zhou, Clifford, Gilmore & Townsend, 1993), and participation and membership (Keith et al., 1993; Borden, 1997). Instruments are also available to help guide successful collaboration (Borden & Perkins, 1999), but how such factors are embedded into the evaluation process is not often made explicit.

Evaluation plans sometimes omit details about collaboration, such as defining the people (the ‘who’) and capabilities (the ‘why’) that will take part in the different tasks (the ‘what’). However, making these elements of collaboration explicit provides an opportunity to support your evaluation standards.

Standard of Proof recently tested the feasibility of an “impossible” group randomised trial. While testing the overall system for feasibility, our work focussed on developing the structures and processes to ensure the highest standard of evidence to inform decision-making. The approach required collaboration, and as one element of our efforts, we built an adaptive management process around the Joint Committee Standards, ensuring people (the ‘who’) were engaged in relevant activities (the ‘what’) to support the utility, accuracy, feasibility, propriety and accountability of the evaluation (the ‘why’). Pulling together the expertise and the explicit approach to collaboration made the “impossible” randomised trial possible.

Beyond the ‘who’, ‘what’ and ‘why’, it is important to define the time required to realise the collaborative effort (the ‘how much’). When delivering large-scale or challenging evaluations, such “hidden” costs can break an evaluation, often resulting in low participation and engagement, and poor-quality data. By making the ‘how much’ elements of collaboration explicit, you will increase your opportunity to successfully undertake a challenging large-scale evaluation with quality evidence.