Standards of evidence and evaluation self-assessment tool

We have set out standards of evidence that universities and colleges should aim to meet when they evaluate their work on access and participation. We have also produced guidance on how to strengthen evaluation by achieving good standards of evidence in impact evaluation.


Access and participation standards of evidence

See the full guidance on standards of evidence

We also have an evaluation self-assessment tool, which helps providers review whether their evaluation plans and methodologies go far enough to generate high-quality evidence about the impact of activities in their access and participation plans.


Evaluation self-assessment tool

Download the self-assessment tool


How to use the self-assessment tool

This document sets out the purpose of the tool and explains how to use it.

How should providers use standards?

Higher education providers should use standards of evidence:

  • in deciding what evidence to use to guide decisions about whether to invest in different types of interventions and practices, and to help them improve their access and participation delivery and performance
  • to guide what kind of evidence is generated by their own impact evaluation and analysis, and to clarify the claims that can be made when reporting the results of evaluations externally.

Three types of evaluation

The standards we have published use three types of evaluation, each generating a different type of evidence: narrative, empirical enquiry, and consideration of causal claims.

Type 1: Narrative

  • Description: The impact evaluation provides a narrative or a coherent theory of change to motivate its selection of activities in the context of a coherent strategy.
  • Evidence: Evidence of impact elsewhere and/or in the research literature on access and participation activity effectiveness, or from your existing evaluation results.
  • Claims you can make: We have a coherent explanation of what we do and why. Our claims are research-based.

Type 2: Empirical enquiry

  • Description: The impact evaluation collects data on impact and reports evidence that those receiving an intervention have better outcomes, though it does not establish any direct causal effect.
  • Evidence: Quantitative and/or qualitative evidence of a pre/post intervention change, or a difference compared with what might otherwise have happened.
  • Claims you can make: We can demonstrate that our interventions are associated with beneficial results.

Type 3: Causality

  • Description: The impact evaluation methodology provides evidence of a causal effect of an intervention.
  • Evidence: Quantitative and/or qualitative evidence of a pre/post treatment change on participants relative to an appropriate control or comparison group who did not take part in the intervention.
  • Claims you can make: We believe our intervention causes improvement and can demonstrate the difference using a control or comparison group.

Published 29 March 2023
