Consultation on the future approach to quality regulation
Published 18 September 2025
Section 2: The future TEF
- This section sets out our proposals to modify the Teaching Excellence Framework, which would be at the heart of the future integrated system that would apply to all providers. The current approach to the TEF, consulted on in 2022, was implemented for the first time in 2023. We are not proposing an entirely new approach, but rather to modify the TEF to build on what worked well and to create a more integrated overall system.
- The TEF assessments would be desk-based, with decisions made by academic experts and student representatives, who would consider evidence from providers, direct input from students, and OfS-produced data indicators.
- We propose a number of modifications to extend the reach and strengthen the impact of the TEF in driving improvement across the full range of providers, including the smallest. Our proposals include options for varying the approach for providers where the OfS data is limited.

Proposal 2: Providers in scope
We propose to assess and rate all OfS-registered providers through the future TEF, on a cyclical basis, with rolling assessment cycles.
- In the previous TEF, it was optional for smaller providers (those that had fewer than 500 students and did not have data in at least two of the TEF indicators) to participate. We assessed 227 providers in the 2023 TEF, and a further 154 eligible providers opted not to take part.
- We are now proposing over time to assess all registered providers, and we would revise condition B6 to reflect this requirement. All providers with undergraduate students would be included in the first cycle, and those with only taught postgraduate students would be included from the second cycle onwards. There are currently 431 providers on the OfS register, of which 392 had undergraduate students in the 2023-24 student data return and 13 had only postgraduate students.[3]
- We propose this because we want students at all providers to benefit from high quality and improvement to their experience and outcomes. There was strong agreement in our engagement with sector and student groups with the principle that all providers should be subject to quality assessment. Additionally, should the proposal to integrate B3 assessments be implemented, it will allow us to review student outcomes at every provider on a regular basis in an efficient way, allowing us to identify and take action in more cases where student outcomes are not sufficiently positive.
- One of the implications of extending the reach of the TEF to assess all providers, in combination with integrating an assessment of student outcomes against minimum thresholds, is that it would not be feasible to carry out all the assessments in one year. We propose to carry out future TEF assessments on a cyclical basis, with rolling assessment cycles, as discussed under Proposal 11: Assessment cycle, and we invite you to comment on the assessment cycle under that proposal.
- If we apply the TEF to all registered providers, we would seek to design the assessment approach to recognise the diversity of the sector, and we acknowledge that a ‘one size fits all’ approach may not be appropriate. We would also seek to ensure the approach is proportionate, in relation to the scale of providers’ activity and the risks to students of not receiving high quality provision. These considerations are reflected in our proposals to:
- Simplify the range of information providers would submit to the TEF, with a stronger focus on the student experience and more limited information in relation to student outcomes (see Proposals 5 and 7).
- Explore alternatives to a written student submission, such as commissioning focus groups, so that students at all providers can contribute their views (see Proposal 10: Student evidence and involvement).
- Vary the approach where providers do not have sufficient indicator data for the assessments. This is most likely to affect smaller or newer providers (see Proposal 9: Varying the approach for providers with limited data).
- Vary the incentives, interventions and the frequency of assessment for providers depending on the level of quality and risk (see Proposal 13: Incentives and interventions).
- Schedule most providers without a current TEF rating for their first assessment in the later years of the first cycle (as outlined in Proposal 11: Assessment cycle). This would allow more time for those unfamiliar with the TEF to prepare and could improve availability of data for some.
- Recruit more academic and student assessors with experience and understanding of higher education at smaller, specialist and college-based providers (see Proposal 8: Assessment and decision making).
- We are also interested in views on further actions we could take to ensure the future TEF operates proportionately and effectively for smaller providers and those that have not previously taken part. For example, these could include suggestions to provide more tailored support and guidance for different types of providers, or more detailed (optional) templates for preparing submissions.
Question 2a
What are your views on the proposal to assess all registered providers?
Question 2b
Do you have any suggestions on how we could help enable smaller providers, including those that have not taken part in the TEF before, to participate effectively?
Proposal 3: Provision in scope
We propose to assess undergraduate provision in the first cycle of assessments and to extend the scope to include postgraduate taught provision in the second cycle.
Students in scope for the first cycle
- We propose that the first cycle should focus on the quality of undergraduate provision, including the same levels of qualification covered in the 2023 TEF.
- This would include all undergraduate courses that a provider has responsibility for. This includes courses taught by the provider and courses taught by other providers in England through partnership arrangements, whether through subcontractual arrangements (franchised provision) or through validation of qualifications.
- As with the previous TEF, students taught through partnership arrangements would be considered in the assessments of both the lead provider and the teaching provider. This reflects the responsibilities of both providers for ensuring high quality and positive outcomes for these students. For student information purposes, we would present the ratings of the teaching provider and consider how best to link to information about the lead provider.
- We propose to present data separately for the students taught by a provider, and students taught elsewhere through partnership arrangements, to make differences in quality transparent for these two groups (see Annex G for further details). We have considered proposing to rate a provider’s taught provision and its partnership provision separately, but we do not consider that the benefits of doing so would outweigh the additional costs and complexity. Instead, our view is that it would be appropriate for material differences in quality between taught and partnership provision to have a limiting effect on a provider’s ratings.
- We are considering whether apprenticeship provision should be within the scope of the assessments. Including apprenticeships in the assessment of the student experience could constitute double regulation of this provision, given the responsibilities of Ofsted in this area. However, Ofsted does not make judgements about student outcomes or the quality of assessment practice, and there may be an opportunity to reduce burden in relation to our role delivering external quality assurance of end-point assessment. It may therefore be appropriate for us to consider student outcomes and assessment for apprenticeship provision. We would welcome feedback on the extent to which apprenticeships should be included in the future TEF.
Extending the scope for future cycles
- We would aim to extend the scope of future TEF assessments to include taught postgraduate (PGT) provision, to extend the incentives of the TEF to benefit these students. When extending the TEF to include PGT provision, we envisage that providers would be rated separately for their undergraduate and their PGT provision.
- One of the reasons for not having included taught postgraduate provision in the TEF previously is the lack of sector-wide, comparable data for areas other than student outcomes. We recognise that this will continue to be a challenge in the short term, which is why we are proposing to focus on undergraduate provision in the first cycle. We would, during the first cycle, undertake work to develop a survey of taught postgraduate students. Further detail about this is available in Section 5.
- The Department for Education has recently published plans for introducing modular provision eligible for funding under the Lifelong Learning Entitlement. The first cycle of TEF assessments would consider the quality of the courses within which funded modules sit. We need to develop additional measures that will allow us to assess the experience and outcomes of students undertaking modular provision specifically, and would aim to include these in the second cycle of TEF assessments.[4]
- We have considered feedback received through our sector engagement about the complexity of extending assessments to transnational education and postgraduate research provision, as well as general comments about ensuring that what we propose to implement is deliverable. Our conclusion is that we should not begin any implementation work to support assessment of these areas at this time, but instead ensure that we design the framework to be sufficiently flexible to allow us to incorporate them in subsequent future cycles, subject to further consultation.
- The overall design we are proposing in this consultation would also allow us in future to extend the scope of our assessments: for example, to integrate a judgement about the effectiveness of a provider's governance arrangements. We would expect to consult further before extending the exercise in that way.
Question 3a
Do you have any comments on what provision should be in scope for the first cycle? You could include comments on areas such as:
- the inclusion of apprenticeships
- the proposal to look separately at partnership provision.
Question 3b
Do you have any comments on the proposed approach to expanding assessments to include taught postgraduate provision in future cycles?
Proposal 4: Assessment aspects and ratings
We propose to assess and rate providers for ‘student experience’ and ‘student outcomes’, and to generate ‘overall’ provider ratings based on these two aspect ratings.
Aspects of assessment
- We propose the future TEF would continue to assess two broad ‘aspects’: the student experience and student outcomes. Each provider would be rated for the overall quality of the student experience across all of its undergraduate courses; and the overall student outcomes from all of its undergraduate courses. (From the second cycle, there would be separate ratings for the postgraduate student experience, and postgraduate student outcomes.)
- In the previous TEF, we sought broad alignment between the TEF aspects and our B conditions. In future, we propose that the scope of the two aspects and the TEF rating categories would be more integrated and aligned with the requirements of the B conditions, so that our system provides a clear view of quality delivered by different providers.
- In broad terms:
- The student experience aspect would consider the quality of course content and delivery, assessment, academic support, resources, and student engagement. These align with conditions B1, B2 and the ‘effective assessment’ element of condition B4.
- The student outcomes aspect would consider students’ success in and beyond their studies, in terms of continuation and completion, and measures of further study and employment outcomes. This aligns with condition B3.
- We also considered introducing an additional aspect to assess how effective a provider is at quality improvement. This would be based on a provider submission reflecting on areas for improvement, its approaches and the impact of its improvement activity. We considered this could help to increase the effect of the TEF in driving improvement. However, it would also increase the burden and complexity of the assessment, both for providers and TEF assessors, and would not add a clear benefit in the clarity of information for students about the level of quality they can expect. Overall, we do not consider that the benefits would outweigh the additional burden and cost. Instead, we propose to increase the impact of the TEF by strengthening and varying the incentives and interventions as set out in Section 3. We consider this approach will more strongly incentivise improvement, and result in intervention where it would be of greatest benefit to students, while reducing burden on providers that are already delivering the highest levels of quality.
- We also consider that retaining a focus on the student experience and student outcomes, in the way we propose, would create a cycle of continuous improvement over time. We consider that providers support their students to achieve positive outcomes by delivering a high quality student experience. We propose to assess the student experience based on recent activity (including the provider’s efforts to improve the student experience), and to assess student outcomes based on measures that are more retrospective. Assessing both aspects in this way would mean a provider’s recent actions to improve the student experience would be assessed in one cycle, and if successful should lead to improved outcomes that would be apparent in the next cycle.
- Equality of opportunity would be embedded in the assessment of each aspect, by considering the extent to which the quality of the student experience and outcomes are consistent for all groups of students. Our initial proposals on the criteria for higher ratings build in the notion of consistency across student groups.
- Our proposed approach to each of the aspects is set out in more detail under Proposals 5 and 7. In addition, under Proposal 9: Varying the approach for providers with limited data, we set out how we could vary the approach to assessing the student experience and student outcomes aspects if the OfS indicators for a provider are insufficient.
TEF ratings
- We consider TEF ratings and their publication to be a helpful mechanism to incentivise providers to deliver the highest levels of quality for their students. The student experience and student outcomes aspects would continue to be rated individually, as in the previous TEF, and we also propose a simplified way to generate an overall rating for a provider based on the two aspect ratings.
- We are not proposing to change the number of rating categories or their names. This is because previous research indicated that students found ‘Gold’, ‘Silver’, ‘Bronze’ and ‘Requires improvement’ to be clear.[5]
- We propose to change the descriptions of the rating categories to align them with the requirements of the B conditions and to improve their clarity and the distinctions between the categories. Alongside this we would clarify that the requirements set out in the B conditions represent the minimum level of quality required, rather than representing high quality. In summary, we propose that:
- Gold ratings would represent the highest level of quality, signifying that quality is consistently outstanding for the provider’s mix of students and courses.
- Silver ratings would represent high quality that is consistently and materially above the minimum requirements for the provider’s mix of students and courses.
- Bronze ratings would align with meeting our minimum quality requirements, as expressed in the relevant B conditions of registration. This would be a change from the previous TEF, where Bronze was intended to represent quality above the minimum requirements.
- Requires improvement ratings would signal there are concerns about meeting the minimum requirements and improvement is needed. This would be a change from the previous TEF, where this category signalled that improvement was required ‘to be awarded a TEF rating’.
- Where the Silver and Gold rating categories refer to consistency, we propose this would relate to both:
- consistency across the criteria for that aspect
- consistency across the provider’s mix of students and courses, and different areas of provision (such as full-time and part-time, and taught and partnership provision).
- Under Proposals 5 (The student experience aspect) and 7 (The student outcomes aspect), we provide initial thoughts on the criteria for the TEF rating categories for each aspect, and about how consistency would be considered.
Overall ratings
- We are considering the advantages and disadvantages of retaining an ‘overall’ provider rating, in addition to the aspect ratings. On balance we think it would be beneficial to generate an overall rating, but in a way that avoids additional workload for the TEF assessors. In the previous TEF, the panel spent significant time and effort weighing up all the evidence and factors to decide an overall rating whenever the two aspect ratings differed. If we retain an overall rating, we propose to do so in a way that avoids this additional work for assessors.
- Our research with prospective students indicated that the overall ratings had some value as a ‘confirmatory tool’, largely after students had already made choices. It is unclear from the research so far whether publishing aspect ratings alone, without an overall rating, would be more or less useful to applicants. However, the research did find there is scope to improve the value of the information by ensuring that clear, succinct information, which is more specific about what can be expected, is presented alongside a provider’s ratings.
- In terms of the advantages of an overall rating, there are areas of our proposals where this would be helpful, for example to determine the frequency of assessment (Proposal 11) or what incentives or regulatory interventions would apply to a provider (Proposal 13). In addition, it would be helpful for overall ratings to link to fee levels as currently determined by the Department for Education (DfE).
- We propose therefore to introduce a rule-based approach for determining an overall rating for each provider, which would not require additional work or judgement by TEF assessors. We consider the most appropriate and clearest ‘rule’ would be that a provider’s overall rating would be the same as its lowest aspect rating. This aligns with our proposals that a Silver aspect rating would require consistently high quality, and a Gold aspect rating would require consistently outstanding quality. By extension, a Silver overall rating would signify consistently high quality across both aspects and a Gold overall rating would signify consistently outstanding quality across both aspects. Where one aspect is ‘Requires improvement’ this would be the appropriate overall rating because there would be material concerns about the provider delivering the minimum quality requirements.
- We propose that the overall ratings would be published alongside the aspect ratings, with a clear explanation of their meaning.
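The proposed rule for combining aspect ratings is simple enough to express mechanically. As an illustrative sketch only (the rating names come from this consultation; the ordering, function name and code itself are assumptions for illustration, not part of the proposal):

```python
# Illustrative sketch of the proposed rule-based overall rating.
# Rating names are taken from the consultation; everything else here
# (the ordering, the function name) is an assumption for illustration.

# Ratings ordered from lowest to highest.
RATING_ORDER = ["Requires improvement", "Bronze", "Silver", "Gold"]

def overall_rating(student_experience: str, student_outcomes: str) -> str:
    """The overall rating equals the lowest of the two aspect ratings."""
    return min(student_experience, student_outcomes, key=RATING_ORDER.index)
```

Under this rule, for example, a provider rated Gold for student experience and Silver for student outcomes would receive a Silver overall rating, and a ‘Requires improvement’ rating for either aspect would carry through to the overall rating.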
Question 4a
What are your views on the proposal to assess and rate student experience and student outcomes?
Question 4b
Do you have any comments on our proposed approach to generating ‘overall’ provider ratings based on the two aspect ratings?
Proposal 5: The student experience aspect
We propose to:
- align the scope and ratings criteria for the student experience aspect with the requirements of conditions B1, B2 and B4
- assess the student experience on the basis of provider submissions, an expanded set of NSS-based indicators, and additional evidence from students.
- We consider that the approach to assessing the student experience aspect in the previous TEF broadly worked well, and there is insufficient reason to make substantive changes to that approach.
Scope and criteria
- The main change we propose is to align the scope and the ratings criteria of the student experience aspect with the requirements of the relevant B conditions (B1, B2 and the ‘effective assessment’ element of B4). This means, in broad terms, the student experience aspect would consider the quality of course content and delivery, assessment, academic support, resources, and student engagement.
- As referred to under Proposal 4: Assessment aspects and ratings, we are proposing changes to the descriptions of the TEF rating categories, so that ratings would indicate clearly whether a provider meets or exceeds the minimum quality requirements. We would produce ratings criteria for the student experience aspect setting out:
- What would constitute consistently outstanding quality, required for a Gold rating.
- What would constitute consistently high quality (materially above the minimum requirements) required for a Silver rating.
- What would constitute quality that is in line with the minimum requirements, required for a Bronze rating. This would be set out in clear, succinct descriptions of what is required by the relevant B conditions.
- The factors that might lead to a Requires improvement rating. These would be based on material concerns, arising from the assessment, about whether a provider meets the minimum requirements. Note that this rating would not in itself mean that a provider is in breach of the relevant quality conditions, because a more detailed assessment would probably be needed to establish this (see Section 3 for further discussion of how TEF ratings relate to decisions about breaches of conditions).
- In Annex H we provide initial thoughts on how the criteria for Gold, Silver and Bronze ratings could be presented. We would refer providers and assessors to the relevant B conditions for more detailed descriptions and definitions of the minimum requirements.
- We recognise that the requirements of condition B1 are set out at course level, and we are seeking to align the criteria for a provider-level TEF assessment with these. Our initial thoughts about the criteria in Annex H for ‘course content and delivery’ (at Bronze level in particular) are described at the course level. We would welcome views and suggestions about whether these criteria should be framed differently for a provider-level assessment, or about what evidence could demonstrate the requirements of condition B1 are met, at a provider level.
- As indicated in Annex H, we propose that the criteria for Silver and Gold student experience ratings would include the consistency of the student experience. This would involve considering, across the range of criteria, whether the student experience is of high quality (for Silver) or outstanding (for Gold) across a provider’s subjects, areas of provision and student groups, including students from underrepresented and disadvantaged groups. Where there is inconsistency and the data indicates lower quality for some groups, subjects or areas of provision, the assessors would consider:
- information in the provider’s submission about how it has been addressing those differences
- whether it would be appropriate for any lower quality areas to affect the overall rating for student experience, taking into account the full range of provision.
Evidence and assessment
- As with the previous TEF, the assessment of the student experience aspect would be based on a combination of evidence submitted by a provider, input from its students, and benchmarked indicators drawn from the NSS.
- The timeframe covered by the student experience aspect would also remain broadly the same as the previous TEF. We are considering options for whether this should be a standard four-year period, or the period since the last assessment (which could vary, as set out under Proposal 11).
- We consider it important that the assessment of the student experience aspect should take into account students’ own views of their experience, and it remains our view that the NSS is a valuable means of gathering these insights. However, as the NSS-based indicators are not direct measures of the quality of the student experience, they would continue to be only part of the evidence considered in the assessment of this aspect, and would need to be supplemented with other evidence from the provider and its students.
- Alongside the five NSS-based indicators we used in the previous TEF, we propose the addition of one further indicator, based on the ‘Learning opportunities’ theme of the NSS.[6] These questions ask about students’ exploration of ideas and concepts in depth, whether their course introduced subjects and skills in a way that built on what they had already learned, and the balance between directed and independent study. We consider this theme has direct relevance to condition B1, which refers to the coherence of courses in respect of the breadth and depth of content and the key concepts and skills covered by the course, and to the balance of delivery methods and of directed and independent study.
- In the previous TEF, students provided further direct input into the assessment of the student experience aspect through an optional student submission. We are considering ways to strengthen student input through a more focused student submission, and exploring other ways of obtaining student views where a student submission is impractical for the student body. We would intend to gather direct input from students at all providers. Alternatives to a student submission could include, for example, running focus groups with a provider’s students. We discuss this further under Proposal 10: Student evidence and involvement.
- The provider submission would contain evidence determined by the provider – as relevant to its context – that demonstrates how it meets the student experience ratings criteria. Similarly to the previous TEF, a provider would be guided to supply evidence that demonstrates the impact and effectiveness of its approaches to delivering a high quality or outstanding academic experience for its students. We would also continue to expect the evidence to cover all students and courses within the scope of the TEF assessment. This could mean the inclusion of evidence related broadly to all the provider’s students and courses, as well as to more specific interventions or improvements for particular groups of its students or courses.
- There would continue to be a page limit for provider submissions, to limit burden for providers and TEF assessors. Feedback from the previous TEF suggested the 25-page limit worked well. We would not expect to raise this limit, and we anticipate our proposal to streamline the assessment of student outcomes in future (see Proposal 7) could enable the page limit to be reduced.
- In making a judgement on the student experience aspect, the TEF assessors would draw on their expertise to interpret and weigh up whether the evidence in the round suggests the provider is delivering the minimum level or higher levels of quality across its students and courses. Assessors would triangulate evidence from across the NSS, the provider and student submissions (or alternative student input). We would produce guidelines for assessors on how to place weight on the different sources of evidence and how to take account of the particular context of the provider and its students.
- As in the previous TEF, the NSS-based indicators would be benchmarked to show how positive a provider’s students are about aspects of their experience, compared with the views of similar students on similar courses across the sector. As with the previous TEF, we expect that indicators materially above benchmark would (alongside other evidence in the submissions) suggest outstanding quality, and that indicators broadly in line with benchmark would suggest high quality. While we do not set minimum thresholds for the NSS-based indicators as we do for student outcomes, we expect that indicators materially or consistently below benchmark, with insufficient explanation by the provider and a lack of other evidence of how the minimum requirements are being met, would suggest concerns about quality.
- We recognise that for some providers the NSS indicators are unavailable, or there is insufficient statistical certainty to place weight on them in the assessment. We have therefore considered alternative ways of gathering student views to inform the assessment of the student experience aspect in these cases. This is discussed further under Proposal 9: Varying the approach for providers with limited data.
Question 5a
What are your views on the proposed scope of the student experience aspect, and how it aligns with the relevant B conditions of registration?
Question 5b
What are your views on our initial thoughts on the criteria for the student experience rating (at Annex H)? You could include comments on:
- whether the ‘course content and delivery’ criteria suggested in Annex H should be framed differently for a provider-level assessment
- whether there is clear enough differentiation between each level, and how this could be improved.
Question 5c
What are your views on the evidence that would inform judgements about this aspect? You could include comments on issues such as:
- what evidence could demonstrate the requirements of condition B1 are met at a provider level
- whether the submission page limit should be reduced
- the proposed inclusion of indicators based on the ‘Learning opportunities’ theme of the NSS.
Proposal 6: A revised and integrated condition B3
We propose to revise and simplify our minimum requirements for student outcomes (condition B3), and integrate into the future TEF an assessment of whether a provider meets them.
- As indicated under Proposal 1: A more integrated overall system, we propose to simplify condition B3 and integrate an assessment of the minimum required student outcomes into future TEF assessments. This would produce a clear single view about the outcomes delivered by a provider. It would reduce duplication of effort both for providers and the OfS in monitoring separate data and carrying out separate assessments where there are concerns about meeting the minimum student outcome thresholds. It would also allow us to assess whether each registered provider is meeting minimum requirements for student outcomes on an ongoing basis. Our view is that this would help ensure, in an efficient way, that all providers are at least delivering the minimum level of positive outcomes for their students.
- We are considering simplifying condition B3 and integrating an assessment against minimum student outcomes into the TEF, as follows:
- We would revise condition B3 so that we continue to require providers to meet minimum thresholds for continuation and completion, but remove the requirement to meet minimum thresholds for progression. Our reasons for this are set out at paragraph 96. This would mean removing the progression indicator and its associated thresholds from the requirements of condition B3.
- We would aim to integrate the B3 student outcomes indicators with the benchmarked TEF indicators. Alongside the benchmarked indicators we would clearly show any areas where a provider’s continuation or completion rates are below the relevant minimum threshold. Within each mode and level of study that the thresholds are set at, this could relate to particular subjects, areas of provision or student groups, including students from underrepresented or disadvantaged backgrounds.[7]
- In their TEF submissions, providers would be invited to provide relevant contextual information that might justify any continuation or completion rates below threshold, especially where these are also below the provider’s benchmark.
- We propose to revise condition B3 so that only factors that explain historical performance would be considered as justifying below-threshold outcomes (for the reasons set out at paragraphs 99 to 100). Improvement actions that a provider has taken, or plans to take, but that have not yet resulted in improved outcomes, would no longer justify outcomes below the threshold. These actions would instead be considered separately by the OfS, when considering whether any intervention is needed to ensure improvement (as described at paragraphs 100 and 210).
- When assessing student outcomes in the TEF, we would initially identify any below-threshold indicators that are sufficiently material, and not explained by benchmark performance, to warrant further consideration. We would then consider whether the relevant contextual information submitted by the provider justifies the below-threshold performance.
- The provider would be awarded a rating of Requires improvement for student outcomes if there are continuation or completion indicators that are below minimum thresholds and are not justified, and these are considered to be material to the overall outcomes it delivers. These below-threshold outcomes may relate to particular subjects, student groups or areas of provision.
- The OfS would also make any decisions about a breach or increased risk of future breach of condition B3, and about any appropriate regulatory intervention. This would include consideration of actions the provider has taken, or plans to take, to improve student outcomes. These OfS decisions are discussed under Proposal 13.
- The proposed amendments to condition B3 would also apply to our assessments of providers that apply to become registered with the OfS. In the second stage consultation we would consult on how the amended condition would apply as an initial condition of registration where the applying provider has student outcomes data, as well as an ongoing condition for registered providers.
Rationale for the revisions to condition B3
- The proposal to remove the progression indicator from condition B3 follows from feedback we have received from the sector about technical limitations with this measure and the Graduate Outcomes survey data. For example, some courses are intended to lead to certain jobs that are not classified as professional or managerial, and there is limited information about graduates’ ‘interim’ activities prior to the survey census date. We have reflected on the use of this measure for compliance as opposed to improvement purposes. Instead of regulating against a minimum numerical threshold for progression, we propose to use a broader set of benchmarked employment-related indicators to inform the TEF assessment (see Proposal 7), which between them would be intended to capture a wider set of positive post-study outcomes.
- We have heard from students about the importance of their higher education improving their employment outcomes, so we are proposing improvements to the way this is considered in the TEF assessment (see Proposal 7), but without regulating employment outcomes against minimum thresholds. This approach also aligns with feedback that continuation and completion are the measures most directly within a provider’s control, and that continuation is the least lagged indicator, suggesting that these are the most appropriate measures for which to set minimum thresholds that all providers are required to meet.
- Current guidance for assessments of compliance with ongoing condition B3 sets out the contextual factors that could potentially justify outcomes that are below the relevant threshold.[8] These factors fall into two broad groups:
- factors that may explain the reasons for a provider’s historical performance.
- actions a provider has taken, or will take, to improve its performance, and the extent to which these actions are credible.
- We propose that in future, when considering whether below-threshold student outcomes are justified by context, we would consider only factors that may explain the reasons for a provider’s historical performance. We propose in future to consider the following types of contextual justification:
- The provider’s mix of students and courses. This mix is taken into account in the provider’s benchmarks. If a provider is below the minimum threshold but meets its benchmark, this would normally be considered sufficient justification.
- Other historical factors that explain the indicators being below threshold, for example where there were historical inaccuracies in a provider’s data, or specific reasons why the construction of the indicator did not provide an accurate measure of continuation or completion for that provider.
- We propose to no longer consider actions taken or planned by a provider to improve student outcomes as a justification of why these outcomes were historically below the threshold. This is because any positive effect of the actions would in time become apparent in the data, and at that point would affect the assessment. Our proposed approach, which focuses on the measurable impact of any actions, is more robust and less burdensome than trying to evaluate the likely effect of actions on future outcomes. Instead, we propose to consider actions that have not yet affected the measured outcomes as relevant to our assessment of whether there is an increased risk of a future breach, and in understanding whether intervention is necessary to ensure the provider makes sufficient improvements. Our experience of B3 assessments to date is that planned actions would be more appropriate to consider in deciding whether intervention is needed, than in judging whether outcomes below the threshold were justified.
The following is an illustrative example of the approach, if the proposed revisions to condition B3 are applied.
A provider’s data shows outcomes below threshold (and below benchmark) in the years covered by the assessment. The provider has recently discontinued courses that had caused these outcomes to be below threshold. Under a revised condition B3, the fact that the provider recently closed these courses would not be considered as a justification for the outcomes being below threshold during the period under assessment, and the provider’s student outcomes would be rated as ‘Requires improvement’ for that period. The OfS would be likely to find a historical breach of condition B3, and would then consider whether any intervention would be appropriate to ensure improvements are made. As part of this, the OfS would consider the provider’s actions in closing the courses. If data analysis shows the effect of closing the courses would bring the outcomes above threshold, the OfS would be likely to decide there was not an increased risk of a future breach, and would not be likely to take any further action. The effect of the closed courses would feed through into the provider’s outcomes data in future years, resulting in a more positive assessment in future TEF assessments.
- We are proposing that the first cycle of TEF assessments would cover undergraduate provision, and therefore integrated assessments against the requirements of condition B3 would focus on undergraduate provision for the first cycle. A revised condition B3 would still apply to all levels of higher education, but we do not envisage carrying out routine assessments against condition B3 for postgraduate courses until the second cycle of TEF assessments. During the first cycle, we would take a risk-based approach to assessing postgraduate student outcomes. We may, if the data indicates significant concerns about outcomes for a provider’s postgraduate students, select that provider for a targeted assessment as described in Section 3.
- In 2022, when we set the minimum numerical thresholds for condition B3, we said we would normally review the thresholds every four years. We now propose to remove the thresholds for progression, and we expect to review the thresholds for continuation and completion during 2026. We expect this review to be light touch, rather than an in-depth review of each threshold. We would not expect to revise a threshold unless there has been a material change in sector-wide data since 2022. We would consult on proposals arising from the review, including any potential changes to thresholds, as part of the next consultation in 2026.
- Subject to the outcome of this consultation in this area, we will set out the detail of proposed changes to condition B3 and Regulatory advice 20: Regulating student outcomes in our consultation on detailed proposals for the future TEF.[9]
Question 6
Do you have any comments on our proposed approach to revising condition B3 and integrating the assessment of minimum required student outcomes into the future TEF? You could include comments on areas such as:
- removing the progression indicator from condition B3
- how contextual factors would be considered at different stages in the process.
Proposal 7: The student outcomes aspect
We propose to rate student outcomes based on benchmarked indicators of continuation, completion and a broader set of post-study indicators, and taking contextual factors into account.
Scope and criteria
- We think there are advantages to streamlining the assessment of student outcomes, to focus on a set of outcome measures based on available data. This would make the assessments more comparable and less burdensome and complex. We propose to use indicators for continuation and completion and a broader set of measures of post-study and employment outcomes. These would all be benchmarked to take into account the context of a provider’s students, courses and other factors. Some additional context could also be taken into account through the provider submission. We propose to avoid the additional complexity and burden that would be involved if providers were invited to submit detailed information about their approaches to delivering positive outcomes and their own measures, including measures of educational gain (see paragraphs 121 to 128).
- As explained under Proposal 4: Assessment aspects and ratings, we propose changes to the descriptions of the TEF rating categories so that ratings would show clearly whether and how far a provider exceeds the minimum requirements. Table 1 includes our initial thoughts on potential ratings criteria for the student outcomes aspect. These are based on two ‘anchor points’:
- The minimum absolute thresholds for student outcomes. Continuation or completion outcomes below the minimum thresholds required by a revised condition B3 would result in a Requires improvement rating (if not justified by context). Outcomes above these minimum thresholds would be needed for a rating of Bronze or above.
- A provider’s benchmarks for the student outcome measures. These are sector averages, adjusted for the provider’s types of courses, student characteristics and other factors. We consider that the sector as a whole delivers positive outcomes that are materially above the minimum thresholds, so if a provider performs in line with its benchmarks this would signify high quality (as long as these outcomes are also above the minimum thresholds). Performance materially above benchmarks would signify outstanding outcomes. Performance below benchmarks (but meeting the thresholds) would signify sufficient outcomes. When interpreting performance against benchmarks, context would be taken into account (so, for example, an indicator below benchmark could be considered high quality if justified by context).
Table 1: Student outcomes ratings criteria
| Rating | Description and criteria |
| --- | --- |
| Gold | Student outcomes are consistently of outstanding quality. Outcomes are consistently outstanding and materially exceed what would be expected for the provider’s mix of students and courses. |
| Silver | Student outcomes are consistently of high quality. Outcomes materially exceed minimum requirements and are what would be expected for the provider’s mix of students and courses. |
| Bronze | Student outcomes are of sufficient quality. Outcomes meet minimum requirements, but are below what would be expected for the provider’s mix of students and courses. |
| Requires improvement | Student outcomes require improvement. Outcomes are below minimum requirements. |
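Purely as an illustration (and not part of the proposals), the anchor-point logic described above could be sketched as a simple decision rule. The function name and boolean inputs below (`meets_thresholds`, `below_threshold_justified`, `meets_benchmarks`, `materially_above_benchmarks`) are hypothetical simplifications introduced here; in practice the assessors would weigh multiple indicators, data splits and contextual factors in the round rather than apply a mechanical rule.

```python
def student_outcomes_rating(meets_thresholds: bool,
                            below_threshold_justified: bool,
                            meets_benchmarks: bool,
                            materially_above_benchmarks: bool) -> str:
    """Illustrative sketch only: a simplified reading of the two
    'anchor points' (minimum absolute thresholds and benchmarks)
    for the student outcomes aspect. All inputs are assumptions."""
    # Below minimum thresholds, with no contextual justification
    if not meets_thresholds and not below_threshold_justified:
        return "Requires improvement"
    # Materially above benchmark: consistently outstanding
    if materially_above_benchmarks:
        return "Gold"
    # In line with benchmark: consistently high quality
    if meets_benchmarks:
        return "Silver"
    # Above thresholds but below benchmark: sufficient quality
    return "Bronze"


# Example: above thresholds, in line with (but not materially
# above) benchmarks
print(student_outcomes_rating(True, False, True, False))  # Silver
```

The sketch deliberately ignores the consistency checks across subjects, areas of provision and student groups that the proposals describe; it is intended only to make the relationship between the two anchor points concrete.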
- In suggesting these criteria, we are interested in how we can embed consideration of equality of opportunity. In common with the student experience aspect, we are proposing that a provider must demonstrate that it is delivering consistently high quality (on benchmark) outcomes for a Silver rating, and consistently outstanding (above benchmark) outcomes for a Gold rating, for students from all backgrounds.
- Assessors would consider consistency across subjects, different areas of provision (for example, full-time and part-time courses, and taught and partnership provision) and student groups, including students from underrepresented and disadvantaged groups. Where there is inconsistency and the data indicates lower outcomes for some subjects, areas or student groups, the assessors would consider:
- information in the provider submission to explain those outcomes (as per paragraph 110)
- whether it would be appropriate for any areas with lower outcomes to affect the overall rating for student outcomes, taking into account the full range of provision.
Assessment and evidence
- Unless we have already concluded that student outcomes require improvement following an assessment of whether minimum thresholds have been met, the assessors would assess student outcomes by considering benchmarked indicators for continuation and completion, along with a set of benchmarked post-study or employment indicators. The assessors would also take account of relevant contextual factors submitted by the provider.
- The assessors would consider in the round whether the range of student outcome measures indicate whether student outcomes are below, in line with, or above what would be expected for the provider’s mix of students and courses. As part of this they would consider consistency across student groups, subjects and areas of provision.
- The provider would have the option to submit relevant contextual information about factors other than those already accounted for by benchmarking that should be considered. This could, for example, include:
- Information about the jobs that certain courses are intended to lead to, that are not classified by the Office for National Statistics as professional or managerial (and therefore would not count positively in the progression measure). To limit the burden on providers we could include information in our guidance about specific occupations where we know this occurs, or specific qualifications (such as certain higher technical qualifications) that are intended to lead to such occupations.
- Information about flexible study pathways that may result in some students progressing through their studies in ways that do not count positively in the continuation or completion measures.
- We propose to avoid inviting providers to submit other types of evidence in relation to student outcomes, such as details of their approach to delivery, their own alternative measures of student outcomes, or their own definitions and measures of educational gains. We would set out in guidance the type of contextual information that is likely to be given weight by the assessors.
Benchmarked student outcome indicators
- We propose that continuation and completion would be measured and benchmarked in the same way as in the 2023 TEF, and would continue to be aggregated over the four most recent years. These indicators would be used to understand how far the provider’s students had succeeded in their studies, taking into account their different characteristics and the nature and level of their courses of study. We expect the collection of in-year student data in future would enable us to generate and publish more timely indicators for continuation and completion. Other than bringing forward their publication, we do not expect in-year data collection would result in material changes to these measures.
- We would continue to present ‘splits’ in the data for different student and course characteristics. We would look to make improvements to the way that partnership provision is presented, to make clearer any differences in outcomes through different partnership arrangements. We would consider how to simplify or reduce the range of splits in the data, for assessment purposes.
- We propose to develop a more rounded set of post-study or employment measures that would be used to assess how far a provider’s students go on to succeed after their studies. These would also be benchmarked to take account of students’ characteristics, the subject and level of study, and regional differences that affect employment outcomes. We acknowledge that the current progression indicator does not capture all positive post-study outcomes, and we therefore propose to use several indicators, which, taken together, would provide a more rounded picture. Our initial views are that the following indicators could be generated from existing data without creating additional burden on providers, can be benchmarked, and would together provide a rounded perspective:
- The existing indicator of progression to professional or managerial employment or further study, using data from the Graduate Outcomes survey.
- A measure of graduates’ reflections on how far they are using what they learned in higher education in their post-study activity. This would also use responses to the Graduate Outcomes survey.
- A benchmarked salary measure derived from the Longitudinal Education Outcomes (LEO) dataset. We propose to consider salaries three years after graduation.[10]
- Our initial view is that a measure of how far graduates are utilising what they learned in their studies would complement the other proposed measures, by providing a view of how far their time in higher education had helped prepare graduates for their future, regardless of their type of job or salary.
- A benchmarked salary measure based on LEO data could usefully supplement the other two measures by providing a longer-term measure of student outcomes, responding to feedback that 15 months can be too soon to get a clear indication of post-study outcomes. Our initial view is that looking at salaries three years after graduation would be appropriate. A measure after, say, five years would involve an unnecessarily long time lag. Using the LEO dataset would also complement the other measures, because it covers a larger population beyond those who respond to the Graduate Outcomes survey. We propose to benchmark the data to take account of students’ backgrounds, subject of study and geography, applying the same benchmarking methodology we have established for the other indicators. In doing so we will also consider the analysis and issues raised in the report by the Institute for Fiscal Studies on ‘Using graduate earnings data in the regulation of higher education providers’.[11]
- We have considered whether there might also be benefit in developing indicators based on other responses to the Graduate Outcomes survey, for example:
- questions in the ‘Reflection on activity to date’ section about graduates’ perspectives on the extent to which their current activity fits with their future plans, or is meaningful to them
- questions in the subjective wellbeing section.[12]
- Our view is that there are too many additional factors beyond a provider’s control that could impact answers to these questions, so it would not be appropriate to use such measures in ratings judgements.
- We also considered the appropriateness of a number of other potential measures, for example based on student loan repayment levels or numbers of graduates receiving benefits, but consider that these are likely to be undesirable for a variety of reasons. In some cases, they would duplicate or overlap with what we view as potentially better measures, and in others they would focus on negative student outcomes. As our regulatory approach is based on the extent to which students achieve positive outcomes, we do not consider it appropriate for measures to focus on negative outcomes.
- Subject to the outcomes of this consultation, we would include detailed proposals for the additional post-study measures, and any proposed changes to the definitions of existing measures, in the second consultation. We would potentially share indicators based on proposed specifications with individual providers at that point, to inform their responses.
Provider submissions and contextual information
- We think there are advantages to streamlining the approach and basing the student outcomes assessment on comparable, benchmarked measures drawn from existing national data sources. These would show measurable outcomes achieved over the period since the last assessment. We propose that additional contextual factors (that are not already accounted for by benchmarking) should also be taken into account, where they relate directly to those measures.
- We think this approach would provide an efficient means of rating student outcomes based on comparable, factual information about the extent to which students had succeeded in and beyond their studies. It would avoid the significant additional burden and complexity of inviting detailed evidence from each provider to supplement the indicators. The approach that we propose to take where sufficient indicator data is unavailable for a provider is set out under Proposal 9: Varying the approach for providers with limited data.
- We acknowledge that the set of proposed outcome measures does not necessarily provide a full view of the outcomes achieved by students. However, we consider that it does provide important measures that matter to students, and that taken together these are sufficient to judge how far a provider has delivered positive outcomes for its students.
- We considered whether to continue with the approach to educational gains set out in the TEF 2023 guidance, which would involve providers submitting evidence about their own definitions and measures of outcomes and educational gains. While we have heard positive feedback from some providers about assessing educational gains in the TEF, our view is that pursuing this approach for all providers in the future TEF would be overly burdensome. It would also provide less robust and comparable evidence in the assessment, if each provider defines its own measures.
- Instead, we propose to base the assessment on existing data, including a more rounded set of post-study indicators. All the student outcome indicators would be benchmarked to take account of student characteristics (including entry qualifications and socioeconomic backgrounds) and other factors. This approach would make comparable data available to both the TEF assessors and providers that takes account of each provider’s context, without creating burden.
- In making this proposal we recognise that some providers have invested effort in developing their own measures of educational gain. Our view is that a provider would have scope to set out how it develops its educational provision in ways that support the wider educational gains it intends for its students, as part of the student experience aspect. To avoid creating unnecessary burden, we do not propose this would be compulsory as part of the student experience evidence.
- These proposals respond to feedback received from providers and students as part of our evaluation of TEF 2023 and our review of panel statements to understand where additional evidence supplied by providers had been given weight by the panel. Our view is that assessing the student outcomes aspect based on comparable data would reduce the burden involved in producing submissions for providers and students, and reduce complexity for the TEF assessors. It would also make the ratings easier for users, such as prospective students, to interpret, by improving comparability and increasing clarity about what the aspect covers.
- We are not proposing to seek evidence directly from students to inform the assessment of this aspect. Our reasons for this are twofold. First, it supports the broader approach of basing the student outcomes aspect on comparable data. Second, our survey of student representatives who had prepared student submissions for TEF 2023 showed that many found it difficult to comment on the features covered by the student outcomes aspect, because their representation activity tended to focus less on student outcomes than on the areas covered by the student experience aspect. We therefore consider it appropriate to seek feedback and evidence from students to inform the assessment of the student experience aspect, but not the student outcomes aspect.
Question 7a
What are your views on the proposed approach and initial ratings criteria for the student outcomes aspect?
Question 7b
Do you have any comments on the proposed set of employment and further study indicators, and are there other measures that we should consider using?
Question 7c
What are your views on the proposal to consider a limited set of contextual factors when reaching judgements about this aspect?
Proposal 8: Assessment and decision making
We propose:
- that TEF assessments would be conducted by an evolving pool of academic and student assessors, supported and advised by OfS staff.
- to adopt a risk-based approach under which the assessors would give further consideration to provisional ratings that would have a potentially negative impact on a provider.
Expert review
- The TEF assessments would continue to be carried out by assessors with expertise in the development and delivery of higher education in diverse provider contexts, and experience of being and representing students. The assessors would be appointed to make ratings decisions on behalf of the OfS. We are also considering whether OfS staff should be appointed as assessors and take part in making rating decisions.
- We consider the use of expert review a strength of the TEF that helps ensure the assessments are robust and credible. This is integral to the TEF achieving its policy intention, because providers are more likely to respect and act on the outcomes and recommendations of their assessment if they have confidence in the process.
- As discussed further under Proposal 11: Assessment cycle, we are proposing that in future the assessments would be carried out on a rolling cycle, with a cohort of up to 150 providers assessed each year. With the move to rolling cyclical assessments and the large number of providers to be assessed over an extended period, we would need to appoint an evolving pool of TEF assessors, rather than a single ‘TEF panel’. We expect we would seek to appoint assessors for a period of at least two to three years, which would both ensure overlap and consistency from one cohort of assessments to the next, and frequently bring in new assessors with expertise relevant to upcoming cohorts. As with the previous TEF, assessors would be appointed through an open recruitment process.
- We would select an appropriate group of assessors from the pool for each cohort, seeking to ensure that across the group there is an appropriate mix of skills, and that the group contains members from diverse backgrounds and with experience relevant to the types of providers being assessed in the cohort.
- We recognise there are challenges associated with recruiting a sufficient number of academic and student assessors with experience of smaller, specialist and college-based providers, and we are considering what more we could do to increase their numbers in future. We could, for example:
- offer assessors from these providers a reduced overall workload to enable their participation
- consider how to recruit assessors from these providers who have recently retired or are stepping back from full-time work
- potentially involve these assessors in advising or training other assessors.
- We propose OfS staff would manage the assessment process, and support and advise the TEF assessors. Their role would be designed to reduce burden on academic and student assessors by carrying out activities that do not rely on expertise in the delivery or student experience of higher education. Their role could include recording the assessment outcomes and drafting the assessment reports.
- We are also considering whether OfS staff should contribute to making decisions about ratings, as assessors. We envisage that OfS staff would, whether appointed as assessors or not, assess whether providers meet the minimum student outcomes requirements. This would help align decisions about the student outcomes ratings with the OfS’s decisions about breaches of condition B3.
Decision making
- The TEF assessors (who might include OfS staff) would be responsible for determining the provider’s TEF ratings. We propose a risk-based approach for the assessors to consider representations from providers, before finalising the ratings.
- We propose that all providers would be able to make representations about the factual accuracy of their assessment report prior to publication. To limit burden and costs (to both providers and the OfS), we propose that a provider would be able to make representations about its ratings only if these are provisionally Bronze or Requires improvement. This is because these ratings would have the most significant impact on providers. Silver ratings would be considered a positive outcome, and we consider that for these the costs of considering representations would outweigh the benefits.
- Only OfS staff would be responsible for any decisions relating to breaches of conditions and any regulatory interventions. Depending on the ratings, OfS staff would consider:
- potential further engagement or investigation, if the student experience has been rated Requires improvement
- whether there has been a breach of condition B3, if student outcomes have been rated as Requires improvement
- whether there is an increased risk of a future breach of a condition (this could relate to a provider rated as Bronze or Requires improvement)
- whether a regulatory intervention would be appropriate to ensure improvements are made.
- If the provider has been rated as Requires improvement, we would also consider what an appropriate timeframe would be before it should be reassessed. We are considering whether there should be an option for a provider rated Requires improvement to have a targeted reassessment in some circumstances, rather than a full reassessment, focusing on the specific issues of concern and whether they have been adequately addressed (in which case a rating of Bronze could be awarded).
- Decisions by OfS staff relating to breaches and interventions would be made in accordance with the OfS scheme of delegation. The range of options that we are likely to consider is discussed under Proposal 13: Incentives and interventions.
Question 8a
What are your views on who should carry out the assessments? You could include suggestions for how we can enable more assessors (both academic and student) from small, specialist or college-based providers to take part.
Question 8b
What are your views on only permitting representations on provisional rating decisions of Bronze or Requires improvement?
Proposal 9: Varying the approach for providers with limited data
We propose to:
- use an alternative means of gathering students’ views, where we do not have sufficient statistical confidence in the NSS-based indicators for a provider.
- not rate the student outcomes aspect where we do not have sufficient statistical confidence in the student outcomes indicators for a provider.
- Our proposals include the use of NSS indicators to inform the assessment of student experience, and that we will base the assessment of student outcomes primarily on a set of student outcome indicators. We are aware that for some providers the indicators are unavailable, or there is insufficient statistical certainty to use them in this way. There can be gaps due to suppression because of low numbers, limitations in survey coverage or low survey response rates, and the data that is available may have very wide confidence intervals.
- We are therefore considering, and would welcome feedback on, what approach should be taken in these cases to help ensure the future TEF works effectively across the diversity of provider types and sizes. We discuss each of the two aspects in more detail below, but in summary we propose to vary the approach as follows:
- For the student experience aspect: Where we do not have sufficient statistical confidence in the student experience indicators for a provider, we propose to gather students’ views through alternative means. This might involve online meetings or commissioned focus groups with students. We would use the evidence we gather to supplement available indicator data, or in place of indicator data where there is none.
- For the student outcomes aspect: Where we do not have sufficient statistical confidence in the student outcomes data for a provider, we propose that we would not rate the student outcomes aspect. The absence of a rating would not affect the proposed overall rating, and it would be presented neutrally.
- The effect of these proposals would be that:
- All providers would receive a rating for student experience. For the large majority, this would be based on the provider submission, the NSS indicators, and other direct input from students. For a minority of providers, where the NSS indicators are insufficient, they would be supplemented through an alternative approach to gathering student views.
- A large majority of providers would be rated for student outcomes. A minority would not be so rated. Based on initial assumptions and analysis, we estimate this to be between 10 and 15 per cent of providers (see Annex G for details). We consider this approach to be appropriate for the reasons set out at paragraph 150. Under Proposal 4, we propose an overall rating based on the lowest aspect rating. Where a provider is not rated for student outcomes, its overall rating would be the same as its student experience rating.
- In the previous TEF, if a provider’s indicators were uncertain or unavailable, the onus was on the provider to supply alternative evidence of its own. Feedback from the TEF 2023 evaluation and recent engagement about the development of the integrated system suggested that providers in this position (often smaller providers) found it challenging to devote sufficient resource to producing detailed submissions and presenting alternative forms of evidence. There was additional complexity for the TEF panel in weighing up these alternative forms of evidence. We therefore consider that varying the approach in the specific circumstances described above would reduce burden both for providers with uncertain or unavailable data, and for the TEF assessors.
Student experience
- Our proposal to vary the approach for the student experience aspect is aimed at obtaining alternative or supplementary student views, in cases where we do not have sufficient NSS indicator data. In these cases, the provider would still supply evidence in a written submission which addresses assessment criteria for the aspect. We are considering what kinds of alternative mechanism would be most useful, for example online meetings or commissioned focus groups with students.
- For the student input to be broadly comparable with insights from the NSS, we propose that the range of topics covered should be similar to the themes covered by the NSS, and that views should be gathered largely from final year undergraduates. This could include students on courses that are too short to be included in the NSS. We would need to consider how to gain broadly representative input, although the need for alternative student input typically arises in smaller providers. One approach could be to have online meetings or focus groups with a mixture of final year course representatives and other final year students selected randomly from across different courses.
- To determine which providers would need alternative student input, we would produce a clear definition of what we would consider to be sufficient NSS data to inform the assessment, based on the data available for each provider. We propose to take this approach, rather than setting a standard student number or NSS response threshold, so that we can maximise use of the available NSS data. Our initial view is that a definition could be based on the majority of the overall NSS indicators for a provider covering a significant proportion of the provider’s students, and having at least strong statistical evidence about the provider’s performance against its benchmarks. Further details of this are set out at Annex G. If a provider’s NSS data is unavailable or does not meet this definition, we would gather student views through an alternative means.
- We propose that, once the definition is established, it should be applied consistently to all providers rather than taking into account each individual provider’s preferences. This means there would not be scope for other providers with sufficient NSS indicators to request the alternative or supplementary student input. (Note that the alternative to a written student submission would still be available to all providers, as set out under Proposal 10: Student evidence and involvement).
- We are considering what actions we could take to improve levels of statistical confidence and coverage of NSS data indicators for more providers. We do not expect that lowering the response rate publication threshold would have a significant impact, but we intend to look at the potential effect of this, as well as considering ways of increasing response rates in particular contexts, with the aim of improving confidence in the data. We would also explore extending the coverage of the survey in future, especially once in-year data becomes available and could enable the inclusion of students on shorter courses. We are also considering whether there are ways of making use of the NSS qualitative comments to inform the assessments.
Student outcomes
- As explained under Proposal 7: The student outcomes aspect, we propose that the student outcomes aspect in future would be assessed based only on OfS data and relevant contextual information. We propose that, where OfS data is statistically uncertain or unavailable, the provider would not be assessed and rated for student outcomes. This is because we consider that trying to assess student outcomes based on the provider’s own alternative evidence would create a substantial additional burden for providers and for assessors, and it would not enable the assessors to make judgements about student outcomes on a comparable basis for these providers. Under Proposal 7, we set out in more detail our reasons for not basing the student outcomes rating on alternative evidence from the provider.
- We would produce a clear definition of what we would consider to be sufficient data to be assessed and rated for student outcomes, based on the data available for each provider. We propose to take this approach, rather than setting a standard student number threshold, so that we can maximise use of the available data. Our initial view is that a definition could be based on having at least strong statistical confidence (90 per cent or greater) in a provider’s performance against its benchmarks for the overall continuation indicator, plus at least one of the other student outcomes indicators (completion, progression or one of the other proposed post-study indicators). We propose this approach because we consider continuation to be an especially important indicator of whether students are succeeding. Students continuing their courses of study is a prerequisite for other outcomes later on in the student lifecycle, and this measure involves a shorter time lag than other outcome indicators. We propose there should be at least one other indicator with strong statistical confidence, so that the rating would not be based on a single measure of performance. Further details of this proposal are set out at Annex G.
- If a provider has not been rated for the student outcomes aspect, we would seek to present this neutrally in the publication of outcomes, as discussed further in Section 4: Published outputs of the overall system.
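For illustration, the proposed sufficiency test described above could be expressed roughly as follows. This is a sketch under our reading of the proposal only (Annex G would give the definitive definition); the indicator names and the representation of statistical confidence as a proportion are illustrative assumptions, not part of the consultation text.

```python
# Illustrative sketch of the proposed sufficiency test for the student
# outcomes aspect. `confidence` maps each overall indicator to the
# statistical confidence (as a proportion) in the provider's performance
# against its benchmark, or is missing where the indicator is unavailable.
STRONG = 0.90  # "at least strong statistical confidence (90 per cent or greater)"
OTHER_INDICATORS = ["completion", "progression"]  # plus other proposed post-study indicators

def outcomes_data_sufficient(confidence):
    cont = confidence.get("continuation")
    if cont is None or cont < STRONG:
        return False  # strong confidence in continuation is a prerequisite
    # At least one further indicator must also have strong confidence,
    # so that the rating is never based on a single measure.
    return any((confidence.get(k) or 0) >= STRONG for k in OTHER_INDICATORS)

assert outcomes_data_sufficient({"continuation": 0.95, "completion": 0.92})
assert not outcomes_data_sufficient({"continuation": 0.95})  # only one strong measure
assert not outcomes_data_sufficient({"completion": 0.99, "progression": 0.99})  # no continuation
```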
Worked examples of Proposal 9
Provider A does not have sufficient NSS indicator data. It does have sufficient student outcomes data. We would deploy the alternative means of gathering student views to supplement any available NSS data. The provider would be assessed and rated for both the student experience and student outcomes. Its overall rating would be the lower of the two.
Provider B has sufficient NSS indicator data. It does not have sufficient student outcomes data. The provider would be assessed and rated for the student experience. It would not be assessed or rated for student outcomes. Its overall rating would be the same as its student experience rating.
Question 9a
What are your views on our proposal for an alternative means of gathering students’ views to inform the student experience aspect where we do not have sufficient NSS-based indicators? You could include comments on:
- the proposed approach to determining whether the NSS data is sufficient (this is expanded on in Annex G)
- the actions we are considering to improve the availability of NSS data for more providers
- how student views could be gathered through an alternative means.
Question 9b
What are your views on our proposal not to rate the student outcomes aspect where we do not have sufficient indicator data? You could include comments on the proposed approach to determining whether the data is sufficient (this is expanded on in Annex G).
Proposal 10: Student evidence and involvement
We propose to include direct student input in the assessment of the student experience aspect for all providers, and to expand the range of student assessors.
- In the most recent TEF, the opportunity for student involvement was greater than before, in particular through the introduction of the independent student submission. This gave students a valuable opportunity to have their voices heard, and they fed back to us that being involved had helped them influence positive changes at their providers, and strengthened their voices in discussions about teaching and learning.
- Students also took part in the most recent TEF as full and equal members of the TEF panel, bringing direct experience of being students, and of representing them, to bear in the judgements.
- For the future TEF, we propose to build on and strengthen student involvement further, and we set out below some options for how we might achieve this.
Student evidence
- Students’ perspectives would be central to the evidence used to inform judgements. Evidence would be gathered directly from a provider’s students through:
- Students’ responses to the NSS. This would be a key part of the evidence that informs the assessment of the student experience aspect; and where this data is not sufficient we propose alternative means of gathering students’ views on similar themes.
- Graduates’ responses to the Graduate Outcomes survey, reflecting on how far they are utilising the skills they learned in higher education (see Proposal 7). This would be one of the measures informing the assessment of student outcomes.
- Independent student submissions made on behalf of a provider’s student body or, where this is impractical, an alternative option discussed below.
- In the previous TEF, the student submission was optional and provided evidence related to both the student experience and outcomes aspects. For the future TEF, we propose that:
- we would seek direct and independent student input for all providers, whether through a student submission or alternative option (such as a commissioned focus group)
- the focus would be on the student experience, and would not cover the student outcomes aspect.
- It remains our view that there is value in the student experience aspect being informed by an independent student submission (or alternative student input) in addition to the NSS indicators. For example, this could:
- Supplement the themes covered by the NSS indicators, by highlighting or emphasising specific issues that are important to the provider’s students.
- Supplement the quantitative NSS indicators with additional explanation or context, from the student perspective. (For example, in setting out how students are involved and engaged by the provider in the development of provision, and how students perceive the provider has responded.)
- Include views from students not reflected in the NSS indicators (which cover final year students on courses over a year in length). This could include views from students in earlier years of study, those on shorter courses, and those still studying at the provider.
- We are proposing that the student submission would no longer cover student outcomes, in part because of our proposal that the student outcomes aspect in future would be based on a set of benchmarked indicators and revised contextual information (see Proposal 7: The student outcomes aspect). We also consider that revising the scope of the student submission in this way could have the benefits of reducing the burden of evidence collection on students, and ensuring the student evidence relates to those areas of the assessment where it might have the most impact. We learned from our evaluation of the previous TEF that student representatives had found it more challenging to provide evidence related to the outcomes of previous students no longer studying at the provider. By focusing on the student experience, we would still invite students’ views on how well they consider providers are developing their skills and preparing them for their futures.
- We would aim to gather independent student input for all providers. While it would remain optional for a provider’s students to make a student submission, we would encourage them to do so. We know from operating the previous TEF that in some cases it could be challenging or impractical for a provider’s students to produce a submission, particularly where student representation structures are less formal or less well developed. So we propose there should be some alternative means of gathering students’ views where this is the case.
- One option could be for the OfS to commission focus groups with the provider’s students. We would welcome views on this or other possible options. We also welcome views on whether there are benefits to retaining the option students had in the previous TEF of producing submissions in a non-written format.
- As discussed under Proposal 9: Varying the approach for providers with limited data, we are proposing that if the NSS indicators for a provider are insufficient, the OfS would gather students’ views through online meetings or commissioning focus groups with students. We have suggested under that proposal to focus on gathering views on areas covered by the relevant NSS themes, from final year students. Where the NSS data is insufficient and students do not make an independent submission, we could broaden the scope of this to cover other areas that students may wish to comment on and include students from other years of study.
Student assessors
- The views of students are also embedded in the TEF through their role as student assessors (see Proposal 8: Assessment and decision making). As part of the TEF assessment teams, students (those with recent experience of being and representing students) would jointly assess and decide the ratings for providers alongside academic assessors. As with academic assessors, we would anticipate appointing student assessors for two to three years.
- In previous TEF exercises we recruited fewer student assessors from small, specialist and college-based providers. We would welcome views about how we can increase their numbers in future, including, for example, by offering student assessors from these providers a reduced overall workload to enable their participation.
Involvement in other OfS quality assessments
- As well as informing the TEF assessments, student views form an important part of other OfS quality assessments. For example, quality assessments for providers applying to register with the OfS, or applying for degree awarding powers (DAPs), or where we investigate concerns about compliance with the B conditions, will normally include a visit to the provider where the views of students are sought through meetings with the assessment team.
- In addition, students currently serve as members of the OfS’s Quality Assessment Committee, which plays a key role in advising the OfS on quality assessment matters, including advice on DAPs assessments.[13]
Question 10a
What are your views on our proposed approach to including direct student input in the assessment of the student experience aspect for all providers? You could include comments on alternative ways of gathering student input where student submissions are impractical.
Question 10b
How could we help enable more student assessors from small, specialist and college-based providers to take part?
Proposal 11: Assessment cycle
We propose to:
- assess each provider for the first time within three years, according to a set of priorities
- link the timing of further assessments to the ratings awarded and our ongoing risk monitoring.
- The TEF has previously been a periodic exercise, with a single assessment point for all participating providers. It would be impractical to operate this way in future if we extend assessments to all providers and integrate B3 assessments. A single periodic exercise would also limit our ability to respond rapidly to emerging risks, or vary the frequency of assessments according to risk. We propose instead to carry out assessments on a cyclical basis, assessing a cohort of providers each year. This would enable the system to be more dynamic and risk-based, and more practical to deliver.
- This section proposes how we would schedule providers for their first assessment, and how the cyclical process could operate after that.
- We acknowledge there would be some challenges in moving away from a single periodic exercise to a rolling cyclical approach, including:
- How to ensure reasonable consistency in judgements across cohorts of providers assessed in different years. We intend to address this through maintaining continuity in TEF assessors across years of the cycle (see Proposal 8: Assessment and decision making) and deploying mechanisms for calibration of judgements.
- How to communicate outcomes during a transitional period, in which some providers would hold an award from the previous scheme while others have ratings from the new scheme. This is discussed in Section 5.
- This section sets out our proposals that all providers taking part in the first cycle would be assessed within three years (from 2027-28 to 2029-30), and the frequency of their assessments after that would vary according to their rating and ongoing risk monitoring.
First assessments
- We have considered the impact of other proposed changes, such as the integration of B3 assessments and additional evidence-gathering for providers with limited NSS data, on the time and resource needed for assessments under the new scheme. We currently estimate that we could assess all providers for the first time over a cycle of three years. This would depend on the final assessment method and on recruiting a sufficient range of assessors.
- Rather than setting out a schedule for every provider’s assessment across the full three-year cycle in advance, we propose each year to select a cohort of providers to be assessed in that year. Each year we would inform the selected providers approximately six months before their submission deadline. This is longer than the time providers had to prepare their submission for the 2023 TEF. We are proposing this approach because it will allow us to prioritise providers for assessment on an annual basis, in response to changes in indicators or identification of emerging risks. It will also allow us to take account of significant events that the provider may not have been aware of at the beginning of the three-year cycle.
- We propose to prioritise when to schedule providers for their first assessment in a way that avoids existing TEF ratings being in place for an extended period. In particular, because we are proposing that a Bronze rating in future would have a different meaning from that in the previous TEF, we would prioritise assessing all providers with a 2023 TEF Bronze rating in the first year of the new scheme. We also propose to take account of other factors including:
- increased risks to quality (as set out in the quality monitoring tool described in Proposal 12, or material changes in a provider’s TEF indicators)
- allowing providers to access the benefits of holding a TEF rating (for providers that do not currently have TEF ratings, or that have Requires improvement outcomes)
- the benefits of assessing a diverse mix of providers each year (in terms of indicator performance and other provider characteristics)
- whether any significant events would suggest we should not select a provider for assessment in a given year, for example a structural change or merger.
- In practice, we could take account of these issues by scheduling the assessments broadly as shown in Table 2.
Table 2: Schedule
Year | Providers to be assessed |
---|---|
Year 1 (2027-28) | All providers with an existing Requires improvement or Bronze TEF rating. Some providers with an existing Silver or Gold TEF rating (prioritising those with a Bronze aspect rating, with concerns raised in a previous assessment of compliance with our B conditions, or with increased risk indicators or declining TEF indicators). Some providers without an existing TEF rating (prioritising those that want to take part in year 1, and those with increased risk indicators). |
Years 2 and 3 (2028-29 and 2029-30) | All remaining providers with an existing Silver or Gold TEF rating. All remaining providers without an existing TEF rating. |
- We will need to consider how to balance this range of factors. We are interested in views on whether there are additional factors we should consider, and what circumstances might represent a significant event meaning that we should avoid scheduling a provider for assessment in a given year.
- We will also explore the sequencing of TEF assessments and APP approvals, and consider how the sequencing enables providers to access any higher fee limit associated with TEF ratings in a timely way. We acknowledge there could be some overlap in the information a provider would include in its APP and its TEF submission. We are interested in views about how to reduce this overlap, as well as the sequencing of TEF assessments and APPs: for example, whether there are potential benefits or efficiencies for providers in carrying out both in the same year, whether this should be avoided, and whether there are advantages to sequencing them in any particular order.
- We are aware that many providers are due to make submissions to the Research Excellence Framework in autumn 2028. Providers are also subject to other assessments, for example by Ofsted. A number of our proposals are intended to minimise the burden of preparing TEF submissions, and we consider it reasonable to expect providers to meet multiple demands from regulatory and funding bodies in a given year. We do not therefore propose to avoid scheduling a provider’s TEF assessment in the same year as another type of assessment by another body.
Frequency of further assessments
- Following its first assessment, we propose to base the frequency at which a provider is assessed on its rating. We propose to reassess providers with Gold ratings after five years; those with Silver ratings after four years; and those with Bronze ratings after three years. If a provider has been rated as Requires improvement, we would consider what an appropriate timeframe would be before the provider should be reassessed.
- This approach would reduce burden for those delivering the highest levels of quality, and would allow more regular scrutiny of those with lower ratings. It would also mean that providers with lower ratings would have a scheduled opportunity to increase their ratings, and to derive the associated benefits, sooner than if we assessed all providers equally often.
- This approach would also give providers predictability about the timing of their next assessment, although we would retain the ability to assess a provider sooner if our ongoing monitoring identified increased risks to quality. There may also be other circumstances in which it would be appropriate to change the timing of a provider’s next assessment, for example if sufficient indicator data becomes available to rate student outcomes, or in the case of significant events such as a merger or structural change. We would also consider whether there might be circumstances in which a provider’s award should be extended for longer than initially granted.
- We would take a case-by-case approach to scheduling subsequent assessments for a provider rated as Requires improvement. This would allow us to balance an appropriate level of scrutiny with ensuring that there was a realistic possibility of the provider having made improvements, while not unnecessarily delaying the provider’s opportunity to gain a new rating.
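The proposed cadence can be summarised as a simple lookup. This is an illustrative sketch only: the intervals come from the proposal above, the function name is our own, and the Requires improvement case has no fixed interval because timing would be decided case by case.

```python
# Sketch of the proposed reassessment intervals (years until the next
# scheduled assessment). Requires improvement is deliberately absent:
# its timing would be decided case by case, and any provider could be
# assessed sooner if monitoring identifies increased risks to quality.
REASSESSMENT_YEARS = {"Gold": 5, "Silver": 4, "Bronze": 3}

def next_assessment_year(rating, assessed_in):
    """Return the year of the next scheduled assessment, or None where
    the timing is decided case by case."""
    years = REASSESSMENT_YEARS.get(rating)
    return None if years is None else assessed_in + years

assert next_assessment_year("Gold", 2028) == 2033
assert next_assessment_year("Bronze", 2028) == 2031
assert next_assessment_year("Requires improvement", 2028) is None
```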
Question 11a
What are your views on our proposed approach to scheduling providers for their first assessments? You could include comments on:
- the factors we should consider in scheduling assessments
- any types of significant events that should lead us not to schedule an assessment in that year
- the sequencing of TEF assessments and APP approvals.
Question 11b
What are your views on our proposed approach to scheduling providers for subsequent assessments?
Notes
[3] The remaining providers did not return any undergraduate or postgraduate students in 2023-24.
[4] This would cover students studying only modules, whether or not they are funded by the Lifelong Learning Entitlement.
[5] See YouthSight, ‘Assessing student perceptions of proposed TEF naming and rating options’, a report to the OfS.
[6] The five NSS-based indicators used in the previous TEF related to the following themes: ‘The teaching on my course’, ‘Assessment and feedback’, ‘Academic support’, ‘Learning resources’ and ‘Student voice’.
[7] We plan to publish an integrated version of the existing student outcomes and TEF data dashboards for comment early next year, independently of this proposed change and in response to feedback from dashboard users.
[8] See Annex C of OfS, Regulatory advice 20: Regulating student outcomes.
[9] Available at OfS, Regulatory advice 20: Regulating student outcomes.
[10] The LEO dataset joins education records to tax and benefits data. This shows whether graduates are employed and how much they are paid.
[11] See Gov.UK, Using graduate earnings data in regulation of higher education providers.
[12] These questions explore concepts such as the extent to which graduates are satisfied with their lives and feel that what they are doing is worthwhile, and the extent to which they feel happy or anxious.
[13] For more information on the Quality Assessment Committee, see OfS, Who we are: Quality Assessment Committee.