Consultation on the future approach to quality regulation


Published 18 September 2025

Annex C: Additional background to our proposals

  1. This annex sets out background to our proposals, and includes summaries of:
    • the relevant conclusions of the public bodies review of the OfS
    • feedback that we received through our pre-consultation engagement
    • the findings of the TEF and B3 evaluations.

The public bodies review of the OfS

  1. The public bodies review of the OfS recommended that we bring quality assessment methodologies and activities together into an integrated system that drives improvement across the full range of providers.[19]
  2. In Section 5 on quality regulation, the review report describes the current system, sets out what the review heard, and sets out its conclusions, including that:
    1. The quality of higher education is and must remain a key priority for the OfS.
    2. Interactions between the two strands of quality activity (assessment of compliance with the B conditions and the TEF) are unclear and seemingly minimal.
    3. As the primary tool for incentivising improvement, the TEF is broadly welcomed by the sector. However, it is not mandatory for small providers, and quality activity should be applicable to all providers.
    4. Baseline regulation can play more of a role in improvement.
    5. The OfS should bring together qualitative and quantitative intelligence to form a view of what quality above minimum standards looks like, and disseminate best practice and drive quality across the sector for all providers and students.
    6. By developing an integrated methodology of assessing quality, the OfS should work with the sector to embed a culture of continuous improvement that encourages excellence and innovation beyond the minimum standards. This approach can act as a critical feedback loop that reinforces the OfS’s role in enabling providers to undertake effective quality improvement.
    7. The OfS should develop an effective basket of qualitative, predictive and lead indicators that allow it to regulate quality boldly and confidently, anticipating, identifying and then responding rapidly to address emerging risk.
    8. The OfS should be clearer with the sector and better articulate the risk factors that could lead to providers being selected for further regulatory intervention, including assessment visits and investigations.
    9. The OfS should reflect on how its standards and processes can demonstrate equivalence with European Quality Assurance Register for Higher Education requirements to enable English higher education providers to align to international standards. This should be done through constructive dialogue with the sector and government.

Pre-consultation engagement feedback

  1. Our current proposals have been shaped by engagement with sector and student groups since the beginning of the year. Our thinking has evolved in response to the feedback we have received as we have tested various ideas for what the future quality system could look like.
  2. Following the public bodies review, we initially presented ideas for a fully integrated model, which would involve carrying out assessments of all providers on a cyclical basis both to test whether they meet our minimum quality requirements and to drive improvement beyond these requirements. While there was broad support for the aims of the integrated system, many of those we spoke to raised concerns about the scale, cost and deliverability of a fully integrated approach. Instead, there was support for continuing with targeted risk-based assessments where we have concerns about a provider meeting our quality requirements and, alongside this, building on the TEF to carry out improvement-focused assessments of all providers to drive improvement across the sector.
  3. Some of the key areas on which we have received feedback are:
    • integration
    • simplification and provider burden
    • continuous improvement
    • inclusion of all providers
    • scope of assessments
    • ratings and outcomes
    • student input
    • assessment cycle and transition
    • alignment with the Standards and Guidelines for Quality Assurance in the European Higher Education Area.

Integration

  1. While there were concerns about a fully integrated system, the suggestion of integrating B3 assessments into future TEF assessments was generally welcomed and considered to make our assessment approach more straightforward for providers to engage with.

Simplification and provider burden

  1. Many of those we spoke to encouraged us to consider how we could simplify our approach and reduce burden for providers, particularly in the current financial context, and to ensure that we were taking an appropriately risk-based approach.

Continuous improvement

  1. There was strong support for a system that focuses on supporting and driving enhancement or continuous improvement. There were mixed views about whether this should be done by introducing an additional aspect to assess how well the provider improves, or whether this should be built into the system in other ways.

Inclusion of all providers

  1. There was agreement with the principle that all providers should be subject to regular quality assessment. Small and specialist providers highlighted the need for the assessments to work effectively for them, and to take account of the diversity of provision across the sector. Some also highlighted the limitations of our data indicators in terms of coverage of their students or applicability to their students’ intended outcomes. Providers’ suggestions in this area included more fully taking account of their context, varying our assessment approach for small providers and ensuring that there was good representation from all types of providers among the assessors. Some providers supported the inclusion of visits in assessments as they thought it would help assessors better understand their context, or potentially reduce the burden of preparing submissions.

Scope of assessments

  1. There was support for the extension of TEF assessments to taught postgraduate students, but providers queried the current lack of comparable data for these students and questioned how we would assess the student experience consistently before this was available. Many raised concerns about attempting to assess transnational provision at this time, likewise because of the limitations of available data.

Ratings and outcomes

  1. We heard mixed views about retaining comparative ratings in our early discussions. Some of those we talked to suggested a kitemark-type approach, but others considered that the resulting lack of differentiation would be unhelpful and supported retaining ratings.
  2. In general, there was a preference for ratings to be at a level comparable with the ‘aspects’ in the last TEF exercise, rather than at a more granular level, for example at the level of each assessment criterion. We heard mixed views about whether there should be an overall rating.
  3. In response to the suggestion that we might not rate providers where we have insufficient data, those likely to be affected emphasised the need for us to present this in a way that avoided it being interpreted negatively.
  4. There were questions (including from students) about the value of provider-level outcomes to inform student choice.
  5. There was support for publishing assessment reports that highlight good practice and make recommendations for improvement. The provision of detailed, meaningful feedback that providers could act on was viewed as important.

Student input

  1. We heard strong support for continuing with direct student input into assessments in the form of an independent student submission but, again, we were asked to consider how we could make this less burdensome for students. Small providers in particular encouraged us to think about alternatives to the student submission for those that do not have established student representation structures and where it would be difficult for individual students to put together a submission.

Assessment cycle and transition

  1. Those we spoke to accepted that we would need to move to a rolling approach to assessments if we were to assess all providers, but raised a number of potential challenges associated with this that would need to be addressed. These included:
    • the differing priorities between providers that we would need to account for in our scheduling approach
    • potential clashes in activity for providers, such as assessments clashing with APP submissions and the Research Excellence Framework
    • the complexities of the period where some providers would have ratings from the old scheme and others would have new ratings.
  2. We were encouraged to complete the first cycle of assessments as quickly as possible to mitigate some of the potential risks during the transition.
  3. We also heard that moving to a rolling approach would make it more challenging to achieve consistency in outcomes than had been the case with a single-point exercise, and that we would need to ensure we had strong mechanisms in place for this.

Alignment with the Standards and Guidelines for Quality Assurance in the European Higher Education Area

  1. There were mixed views on whether the new system should aim to be compliant with European standards and guidelines, with some strongly favouring this and some saying it was not a priority for them.

Evaluation findings

Evaluation of our revised approach to student outcomes

  1. The main findings of our early evaluation of the impact of our revised approach to regulating student outcomes (through setting minimum numerical thresholds and carrying out assessments of compliance with condition B3) were that:
    1. Providers were already monitoring and seeking to improve student outcomes before we introduced the revised condition.
    2. The revised condition and data helped to strengthen their approaches, for example through incorporating the thresholds into their course level monitoring, helping them to identify and address potentially weaker areas of performance.
    3. Providers found our documentation complex, and some found it cumbersome to develop their knowledge of our methodology and analyse the data.[20]

Evaluation of the TEF

  1. We carried out a combination of commissioned and OfS-led work to evaluate TEF 2023.[21] This included commissioning IFF Research to evaluate the early impact of the 2023 TEF exercise on providers. The key findings from their work include:
    1. Views on the TEF among providers are largely positive, with most viewing the framework as fit for purpose.
    2. The potential reputational impacts of participating in and gaining a good TEF rating are important to providers, and this is therefore incentivising delivery of an excellent student experience and outcomes (along with a range of other factors).
    3. Participation in the TEF 2023 exercise has enabled many providers to identify areas of strength and weakness in their performance and has led to some making decisions in a more evidence-led way. The TEF panel statements were welcomed and found helpful in this regard.
    4. The impact it has had varies across providers, with a more noticeable impact in providers with less well established, evidence-based quality systems.
    5. The change interventions introduced as a result of the TEF 2023 exercise are still somewhat limited, but that is not unexpected given the timing of the evaluation.
    6. The key challenges for providers were the resource involved in participating and the timescales.
  2. Shortly after the submission deadline, we carried out a survey of student representatives who had been involved in producing student submissions to understand their experiences. We found that:
    1. The vast majority of respondents agreed that their overall experience of TEF involvement was positive and had helped them influence positive changes in learning and teaching at their provider.
    2. Respondents mostly described a very positive experience being supported by and working with their provider.
    3. Nearly all respondents felt their involvement had strengthened the student voice in discussions with providers about learning and teaching.
    4. Most respondents found the timeframes to produce the submission very challenging, and many felt there was not enough time to do the submission alongside their workloads.
    5. Aside from managing timing and resources, the most common areas of challenge were in handling existing data and insights, and working with the student outcomes area.
  3. We also commissioned Savanta Research to gain an understanding of how applicants are using TEF information to inform their decisions. Based on six focus groups with prospective students, they found that: 
    1. Both student outcomes and teaching quality are seen as important factors by students when making choices. However, there is limited evidence that TEF ratings currently feature prominently in these decisions. 
    2. Where TEF ratings are used, they generally serve as a confirmatory tool rather than being a decisive factor, with Gold and Silver TEF ratings providing a sense of reassurance about the chosen institutions. 
    3. There is scope to increase the use and value of TEF information, including through: 
      1. General raising of awareness and understanding of the TEF. When learning more in the focus groups, applicants were positive about several elements of the TEF, including the focus on student experience and outcomes, the use of multiple sources of evidence, and the involvement of students. 
      2. The inclusion of more detail beyond the ratings – presented as concisely and directly as possible – to help applicants understand what TEF ratings represent. 