In this blog, Jon Kay, Head of Evidence Synthesis and International at the Education Endowment Foundation (EEF), reflects on what we’ve learned about effective evidence mobilisation, and what this means for the future.
This week, members of the Evidence for Education Network gathered in Santiago for our annual conference. It was an important opportunity to reflect on some of the lessons we’ve learnt about the network’s most important aim: ensuring evidence makes a difference to children and young people’s lives.
The amount of evidence being generated is growing, but it’s not enough on its own
The quantity of research aimed at understanding the impact of different education approaches and interventions is growing rapidly. In England, the EEF has funded over 200 impact evaluations that rigorously measure the impact of a particular approach on attainment. We also collect comparable cost information and conduct implementation and process evaluations across our studies, to make sure that policy- and practice-relevant information on how and why things work, and on their cost-effectiveness, is made freely available.
This increase is not unique to England. The evidence base is growing around the world too. Across the EEN, our global trials fund has allowed our members to share processes and standards, allowing evidence generation to be more comparable and cost-effective.
While this is hugely welcome, evidence generation does not necessarily translate into evidence use. Increasing the amount of high-quality information available will only be worthwhile if it can support policymakers and practitioners to improve outcomes for children.
Good starting points for mobilising evidence are through plain language summaries and scaling programmes
The need to improve the communication and use of evidence is not new. Many organisations and initiatives are working to improve the use of evidence. Some great examples include the work of the OECD to understand the barriers and enablers of research use, or the new What Works Hub that plans to focus on building understanding of implementation in education. There’s also been an increase in accessible summaries of evidence and in scaling effective approaches to bring them to a greater number of settings.
- Plain language: necessary but not sufficient
Let’s start with the focus on improving accessibility. Accessibility might increase a policymaker’s or practitioner’s ability to understand a research finding – if evidence is not accessible then it cannot be used. However, without the appropriate infrastructure it can increase confusion, or lead to behaviours that are not evidence based. If a policymaker is faced with 10 different studies on how to provide effective early language support, how do we want them to make their decisions?
- Look through all ten studies and then choose the one they agree with most?
- Choose the study that is most accessible and read that one?
- Choose the study where they have the most positive relationship with the research team?
- Critically engage with the evidence base as a whole to understand the quality of evidence, overall messages, as well as the approach most likely to improve outcomes?
This is, of course, a simplification. But increasing the accessibility of research can bring risks. If all research is accessible there is the risk of evidence being cherry-picked. If some evidence is accessible, then decisions might be made on the basis of the communications skills of individual researchers rather than the strength of evidence itself.
- Scaling effective programmes: welcome, but not a silver bullet
A more concrete way of using evidence to change behaviour is by scaling programmes. At the EEF, one of the case studies we are proudest of is the Nuffield Early Language Intervention. After multiple positive randomised controlled trials showing cost-effective impact, we have worked with the Department for Education in England to bring the programme to scale. It is now available across the country, and over half of reception-age classes use the programme.
Examples like NELI are undoubtedly exciting. A combination of evidence and capacity building is allowing access and use of effective approaches that will help drive improvements across the system. We should, however, not get caught in the trap of thinking that all evidence led improvements can come from scaling programmes.
At a system level, scaling programmes may itself not be scalable. Evidence-informed education requires many decisions that are based on spreading knowledge of good practice, and it seems unlikely that the end point of an evidence-based system will simply be a list of ten programmes that we want all schools to use.
So while we need to make sure that we continue to scale effective programmes, we also need to work on building a cumulative knowledge base of practices and approaches that can be adapted to everyday decision making and to answering the questions of policymakers and practitioners.
Taking a systems level approach to evidence use
So what does the future of evidence mobilisation look like? How do we move towards an effective evidence mobilisation system that continues with the positive aspects of programmatic scaling and plain language summaries, but addresses the limitations of these approaches?
Here are three principles to consider:
1. Build a cumulative knowledge base through evidence synthesis
We need to shift away from focusing on how we disseminate the results of individual studies, to collating a shared knowledge base that is accessible to policy and practice.
Evidence synthesis allows us to avoid cherry-picking results. Rather than effectively communicating an individual study, we comprehensively synthesise findings across the evidence base and then ensure that it is the synthesis that is accessible. High quality synthesis also allows us to begin moving beyond individual programmes, through identifying common components of effective practice that can be adapted and implemented away from a rigid programme delivery.
New approaches like “living synthesis” represent a promising future. In education, we can build a shared infrastructure that can be the starting point for a cumulative knowledge base. Knowledge will still need to be translated and applied appropriately in context, but shared synthesis can avoid many of the risks of disseminating one off results.
2. Improve the co-ordination of evidence use by building national brokerage functions or EdLabs
Many of the challenges in evidence mobilisation don’t stem from poor strategies, but from poor coordination between different actors in an education system. The recent movement towards EdLabs offers a solution here.
If evidence-use is going to be responsive to the needs of policy and practice, we need to answer the question “where do people go to get evidence?”. In many systems the current answer is a collection of NGO authored reports, academic journals and policy papers. There is no coordination of infrastructure to make the evidence available. In many cases academics actively compete to get their research used.
Building independent evidence functions can allow systems to broker evidence into policy or practice through coordinating, synthesising and mobilising the evidence that already exists. Key characteristics of an EdLab that might play these roles are:
- Genuine local ownership. Part of the synthesis and translation piece requires understanding of how to ensure evidence is relevant for different contexts. This means moving away from global NGOs owning the evidence-use process.
- Building of trust through transparent and rigorous standards of evidence. Trust is a precondition for any evidence broker to be effective. For trust to be maintained, any broker needs to communicate findings in a transparent, secure, and consistent way.
- Strong relationships with research, practice, and policy. This means co-ordinating efforts to summarise evidence and facilitate the production of new evidence to answer the questions that matter.
3. Consider multiple pathways to impact
A final principle to consider is how we embed knowledge within education systems in ways that support evidence informed practice. This requires proactive engagement with the different levers that change behaviour in the system. In England, the EEF have:
- Worked with the Department for Education to embed evidence into teacher training
- Funded “research schools” to communicate, exemplify and embed research findings with the schools in their region
- Produced free to access resources, posters and guidance documents for schools that summarise the evidence
- Worked on regional campaigns with local advocates
- Scaled programmes with the government
- Worked alongside other middle tier actors like subject specialists and unions to consider how evidence can be used
Scaling programmes is one part of improving practice and embedding evidence use, but we should also consider how to spread knowledge of best practice that teachers and policymakers can apply more directly.
A final challenge
Looking to the future, we need to think about how to create sustainable evidence-use systems in education. This will need to include multiple pathways to impact, be supported by rigorous synthesis, and ultimately be owned by institutions that can objectively translate and broker evidence.
Of course, none of this will be possible without the right research. While this blog focuses on evidence use rather than evidence generation, we also need to make the evidence we produce more responsive to practice. This means moving from pushing evidence to be used, to producing evidence that people want to use. This won’t be easy. Building relationships between practice and research, and developing innovative methodologies that can build knowledge of approaches rather than programmes, are key challenges we will need to embrace in the future.
In the next years of the EEN, our aim is to improve evidence generation, synthesis and use across the world. Our mission isn’t an easy one, but the ultimate aim of improving outcomes for socio-economically disadvantaged children and young people is too important for us to ignore.