Our Evaluation Practice
We encourage high standards of evidence. We specialise in applying experimental and quasi-experimental designs that suit the context. We embed these designs in our mixed-methods approach to conduct evaluations focused on impact, programme activities and context, model comparisons, and overall learning.
We promote relevant and inclusive processes, often working with our clients and stakeholders to ensure the evaluation is appropriate to its purpose and context. We also provide technical assistance to evaluation teams.
We focus on informing decisions, helping people access quality evidence throughout implementation to support success, and at the end for accountability purposes.
Our Research Practice
We review and synthesise evidence, from reviewing literature, documents and data to undertaking meta-analyses. We synthesise these data so they are useful for their intended purpose.
We analyse data, using the most appropriate qualitative or quantitative techniques for both primary and secondary data. We use the IDI and explore other administrative datasets.
Our Monitoring Practice
We inspire progress through evidence. Data are vital when working with multiple stakeholders to achieve "collective impact". We develop theories of change with stakeholders to ensure measures are relevant and have the buy-in required to succeed. We also develop and assess measures to inspire confidence and ensure monitored evidence supports the overall goals.
We make quality data accessible. We create reports that pull together relevant data in real time, so it is useful to everyone who can benefit from data and continuous improvement: implementation teams, managers and funders.