Frequently asked questions about the student outcome and experience data dashboards

Questions relevant to all dashboards

The definitions of the student outcome and experience measures used throughout these dashboards are aligned to our consultation proposals for the Teaching Excellence Framework (TEF) and condition B3 assessments and the evidence that they will draw upon. We have now published our analyses of responses to the consultations on student outcomes and teaching excellence. The data and dashboards currently shown here reflect our consultation proposals and have not been updated to reflect the outcomes of the consultations.

In addition to the consultation proposals describing definitions of each measure, we have published a methodology document which describes the construction of the proposed data indicators and provides more detailed information about some of the terms used within these dashboards.

We have also used statistical methods when presenting the proposed indicators within the dashboards; these are described in further detail in our statistical methods document.

All proportions are rounded to one decimal place. This applies to all indicator, benchmark and difference values included in the data dashboards illustrating the presentation of the student outcome and experience indicators we will publish for each provider each year. It also applies to all estimated differences and rates reported in the exploring student outcomes dashboard and the comparing completion measures dashboard.

Student numbers are reported as counts rounded to the nearest 10 (or in the exploring student outcomes dashboard, the nearest five).

Student numbers are suppressed where there are fewer than 23 students (prior to rounding) in the chosen category.

These thresholds were chosen to retain as much information as possible, while ensuring that information about individuals cannot be identified from the data.
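
For illustration, a minimal sketch of these rounding and suppression rules in Python (this reflects our reading of the rules above, not published OfS code; the function names are invented):

```python
def round_proportion(value: float) -> float:
    """Round an indicator, benchmark or difference value to one decimal place."""
    return round(value, 1)


def report_headcount(count: int, nearest: int = 10):
    """Suppress counts below 23 (prior to rounding); otherwise round to the
    nearest 10 (or the nearest 5 in the exploring student outcomes dashboard)."""
    if count < 23:
        return "[low]"  # suppression code used in the data downloads
    return nearest * round(count / nearest)


print(round_proportion(88.24))          # 88.2
print(report_headcount(22))             # [low]
print(report_headcount(87))             # 90
print(report_headcount(87, nearest=5))  # 85
```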

The dashboards include data up to the 2019-20 academic year. Users may therefore expect these statistics to reflect changes due to the coronavirus (COVID-19) pandemic.

This is particularly relevant for measures of progression, which use data from the Graduate Outcomes survey. While the graduates surveyed were not the students who graduated into the pandemic, the coronavirus pandemic will, for many graduates, have set the scene for an important early stage of their careers.

The 2018-19 survey was undertaken over four different stages: some graduates were surveyed before the pandemic had been declared, and others while different levels of restrictions were in place. However, it is unclear whether the changes in results compared with the 2017-18 survey are directly attributable to the pandemic.

Since the first lockdown in the UK was not declared until late March 2020, it had limited scope to affect the number of entrants between September 2019 and August 2020. The most recent continuation outcomes reported by our measures are evaluated on the basis of a student’s activities in the 2019-20 academic year (one year after commencement for full-time students and two years after commencement for part-time students).

Most standard academic years in this data reporting period will have begun in the autumn of 2019, well before the pandemic, so it would not have affected the most recently collected continuation rates.

The student outcome and experience measures covered by these dashboards are those within scope of our recent consultations on the approach to regulating student outcomes and the TEF.

Provider-level data reporting on measures of access and attainment are reported through the access and participation data dashboard.

In future, the exploring student outcomes analysis could be extended to other measures of the student lifecycle, such as access and attainment.

We published a consultation about our approach to regulating student outcomes and the TEF, which closed on 17 March 2022. The consultation outcomes were published in July 2022. The data and dashboards currently shown here reflect our consultation proposals and have not been updated to reflect the outcomes of the consultations. These dashboards and the analysis they report should aid stakeholders in understanding different aspects of the approach we will use to regulate student outcomes.

The example dashboards illustrate our proposed approach to calculating and publishing the data used to inform our assessments. The other dashboards report on analysis that has been used for setting numerical thresholds for student outcomes, but we anticipate that they also have wider uses.

The definitions of student outcome and experience measures used in this analysis are directly aligned to our consultation proposals for the construction of student lifecycle indicators to be used in condition B3 assessments and the TEF. We have now published our analyses of responses to the consultations on student outcomes and teaching excellence. The data and dashboards currently shown here reflect our consultation proposals and have not been updated to reflect the outcomes of the consultations.

These dashboards are published to support our consultations on regulating student outcomes and the TEF. We welcomed feedback about these dashboards as part of consultation responses. Please note: these consultations closed on 17 March 2022 and the consultation outcomes were published in July 2022.

We are also interested to hear any feedback you may have on the presentation of this data more generally. To share your thoughts, please contact [email protected]. Alternatively, if your feedback is related to the exploring student outcomes data, please contact [email protected]

Questions relevant to the exploring student outcomes dashboard

There are eight student populations for which three student outcomes (continuation, completion and progression) are analysed in this publication. These populations are given by all possible combinations of the three categories below:

  • student domicile: UK domiciled or non-UK domiciled
  • level of study: undergraduate or postgraduate
  • mode of study: full-time or part-time.
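
These combinations can be enumerated directly, for example:

```python
from itertools import product

domiciles = ["UK domiciled", "non-UK domiciled"]
levels = ["undergraduate", "postgraduate"]
modes = ["full-time", "part-time"]

# All possible combinations of the three categories: 2 x 2 x 2 = 8 populations.
populations = list(product(domiciles, levels, modes))
for domicile, level, mode in populations:
    print(f"{domicile}, {level}, {mode}")
print(len(populations))  # 8
```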

For each population and outcome, we have aggregated the four most recent years of available data (or two years for progression, using the available Graduate Outcomes (GO) survey data) to maximise statistical power and borrow strength across years.

  • For full-time continuation, we considered entrants between 2015-16 and 2018-19.
  • For part-time continuation, we considered entrants between 2014-15 and 2017-18.
  • For full-time completion, we considered entrants between 2012-13 and 2015-16.
  • For part-time completion, we considered entrants between 2010-11 and 2013-14.
  • For both full-time and part-time progression, we considered graduates from the 2017-18 and 2018-19 cohorts, using all the available data from the Graduate Outcomes (GO) survey.

Data is not available across all populations for certain characteristics in this analysis. For example, ethnicity information is limited to UK domiciled students. When this is the case, the dashboard will prompt you to select a characteristic for which data is available, or change the population specified by the filters.

The coverage of each characteristic is summarised in Table A1 of the Exploring student outcomes report.

The charts associated with this release present percentage point differences in continuation, completion and progression rates between one group of students and a chosen reference group, after controlling for a given set of factors.

The chosen reference group is shown above each chart. This indicates the group of students against which the outcomes of other student groups are compared.

The bar labelled ‘no other factors (actual difference)’ shows the actual difference in outcomes between the chosen student group and this reference group, averaged over the four years of data (or two years for progression).

Instead of indicating uncertainty arising from the statistical models, the confidence intervals around these bars represent the sensitivity of the difference to the effects of random variation in the outcome being measured. A large confidence interval typically indicates that there are small numbers of students informing the measure and it is advisable to be cautious when interpreting these values.

As an example, consider the difference in continuation rates of male (88.2 per cent) and female (90.9 per cent) UK-domiciled full-time undergraduates, which amounts to 2.7 percentage points.

Our analysis shows that after controlling for differences in the Level 3 qualifications (such as A-levels) held by male and female students on entry to their course, the difference in continuation rates reduces to an estimated 2.4 percentage points. In other words, while differences in the entry qualifications of male and female students account for some of the difference in continuation rates, there remains a difference of 2.4 percentage points even after controlling for this.

In fact, after controlling for all other observed factors, from equality characteristics to the types of study undertaken, the difference reduces further but remains at an estimated 2.1 percentage points. This remaining difference is determined by a combination of:

  • genuine differences in the current student experience between male and female students, where one group faces barriers to continuation that the other does not; and
  • other unobserved factors that differ between male and female students, which are also related to the likelihood of continuation, but have not been controlled for in the statistical model.

It is important to note that this method can only account for observed factors which are included in the model. For example, if female students are generally more motivated than male students (which we cannot observe in the data) and if more motivated students are more likely to continue with their studies, then the difference between male and female students after controlling for all other factors would still not represent the true effect of sex on continuation rates – some of the difference would simply reflect the effect of motivation on continuation instead.
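
To make the idea of ‘controlling for’ observed factors concrete, the sketch below fits a logistic regression of continuation on sex and entry qualifications using simulated data, then averages predictions with sex toggled to obtain the controlled difference. All column names and values are hypothetical, and the OfS’s actual models may take a different form:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated student-level data; all names and values are illustrative.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "sex": rng.choice(["Female", "Male"], size=n),
    "entry_quals": rng.choice(["AAB", "BBC", "CCD"], size=n),
})
propensity = 1.5 + 0.3 * (df["sex"] == "Female") + df["entry_quals"].map(
    {"AAB": 1.0, "BBC": 0.5, "CCD": 0.0})
df["continued"] = (rng.random(n) < 1 / (1 + np.exp(-propensity))).astype(int)

# Actual (unadjusted) difference in continuation rates between the groups.
rates = df.groupby("sex")["continued"].mean()
print(f"Actual difference: {100 * (rates['Female'] - rates['Male']):.1f} pp")

# Controlled difference: fit a model including the control factor, then
# average predicted continuation with sex set to each value in turn.
model = smf.logit("continued ~ C(sex) + C(entry_quals)", data=df).fit(disp=False)
p_female = model.predict(df.assign(sex="Female")).mean()
p_male = model.predict(df.assign(sex="Male")).mean()
print(f"Controlled difference: {100 * (p_female - p_male):.1f} pp")
```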

The differences are estimated from statistical models with uncertainty, which is indicated by 95 per cent confidence intervals on the charts. These indicate how much (observable) uncertainty there is around a given statistic: given the data in front of us, we would expect the true value of the statistic to lie within the interval 95 per cent of the time. In other words, we are 95 per cent ‘confident’ that the true value lies between the two limits of the interval.

When multiple statistics are calculated on a given topic, it is often expected that users will wish to make comparisons between those statistics. To the extent that those statistics include information about statistical uncertainty, that uncertainty can be underestimated depending on the nature of the multiple comparisons being made.

For example, in the case of 95 per cent confidence intervals, the likelihood that a computed confidence interval includes the true value of underlying performance may be substantially lower than the intended 95 per cent when multiple comparisons are made: if you were comparing 20 data points without a multiple comparison adjustment, on average one of those data points (5 per cent of the 20) may appear to be statistically significant when it is not. To overcome this, adjustments can be made to the calculations to control the family-wise error rate or the false discovery rate (the Bonferroni correction, for example).

To align with the proposals in our construction of student outcome and experience indicators to inform the assessment of condition B3 or the TEF, we have not made any such adjustments for multiple comparisons within this analysis. For more information on this proposal, see paragraph 29 of the supporting document, Description of statistical methods.

While we have proposed not to adjust for multiple comparisons, we do ask users who wish to make multiple comparisons to exercise caution when making their judgements because of the higher risk of false discovery when using lower levels of statistical confidence.

For more information, see Annex A of our Exploring student outcomes report.

As a rule of thumb, if the confidence intervals around a particular difference do not overlap with zero, there is sufficient evidence to suggest that the gap is statistically significant at the 95 per cent level, i.e. it is different from zero.

However, to align with our proposed approach to the construction of student outcome and experience indicators to inform the assessment of condition B3 or the TEF, we have made no adjustment for multiple comparisons when calculating these confidence intervals. For more information on this proposal, see paragraph 29 of the document, ‘OfS institutional performance measures: Description of statistical methods’. 

It is important to note that the 95 per cent significance level was primarily chosen to be illustrative of the observable statistical uncertainty. It also provides a tolerance of ‘Type II’ errors that suits our uses on this occasion, based on our expert judgement.
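
For illustration, the rule of thumb above and the effect of a Bonferroni adjustment can be sketched in Python, assuming a normal approximation and invented values (this is not the published OfS calculation):

```python
from scipy import stats

def significant_at_zero(lower: float, upper: float) -> bool:
    """Rule of thumb: a gap is statistically significant if its
    confidence interval does not overlap zero."""
    return lower > 0 or upper < 0

estimate, se = 2.1, 0.8  # estimated difference (pp) and standard error: invented

# Unadjusted 95 per cent confidence interval.
z = stats.norm.ppf(1 - 0.05 / 2)  # approximately 1.96
print(significant_at_zero(estimate - z * se, estimate + z * se))  # True

# Bonferroni adjustment for 20 comparisons: divide alpha by 20, widening
# each interval to control the family-wise error rate.
z_adj = stats.norm.ppf(1 - (0.05 / 20) / 2)  # approximately 3.02
print(significant_at_zero(estimate - z_adj * se, estimate + z_adj * se))  # False
```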

This data is reported at the sector level only. In our January 2022 B3 consultation, we proposed to use this sector-level information in setting numerical thresholds that will be used to test whether individual providers are delivering positive outcomes for their students. The analysis would enable us to control for the various relationships between student outcomes and student characteristics, as identified at the sector level, when setting our proposed numerical thresholds.

The charts associated with this release present percentage point differences in continuation, completion and progression rates between one group of students and a chosen reference group, after controlling for a given set of factors.

These differences are reported across the following student characteristics or ‘split indicators’:

  • age on entry
  • disability type
  • ethnicity
  • Index of Multiple Deprivation (IMD)
  • Tracking underrepresentation by area (TUNDRA)
  • sex
  • free school meals (FSM) eligibility.

These characteristics were chosen because they align with the ‘split indicators’ that we have proposed to use in condition B3 and in the Teaching Excellence Framework (TEF) in our January 2022 consultations.

In other analyses, we additionally include Associations Between Characteristics of Students (ABCS) measures as split indicators. However, given that ABCS measures are themselves derived from statistical modelling of student outcomes given a set of student characteristics, they are not included in this analysis to avoid duplication of information in the fitting of the statistical models.

Separately, we also had to determine which other factors to control for when estimating differences in outcomes across the split indicators listed above. From the outset, we considered as comprehensive a list of factors as possible. We then carried out exploratory analysis of the relationship between each candidate factor and each student outcome within each population. The formal criteria against which these factors were assessed are summarised below:

  • sufficient data quality and coverage
  • correlation with the outcome
  • no strong correlation with other factors
  • Graduate Outcomes response rates (progression only).

For more information about these criteria and the list of factors considered from the outset, see Annex A of our Exploring student outcomes report.
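
As a rough sketch, a screening step of this kind might look as follows for numerically coded factors (all thresholds are invented for illustration, and categorical factors would need an appropriate association measure rather than a simple correlation):

```python
import pandas as pd

def screen_factor(df: pd.DataFrame, factor: str, outcome: str,
                  other_factors: list[str],
                  min_coverage: float = 0.9,
                  min_outcome_corr: float = 0.05,
                  max_factor_corr: float = 0.8) -> bool:
    """Assess a candidate control factor against the criteria above.
    All thresholds are illustrative, not values used by the OfS."""
    # Sufficient data quality and coverage: enough non-missing values.
    if df[factor].notna().mean() < min_coverage:
        return False
    # Correlation with the outcome.
    if abs(df[factor].corr(df[outcome])) < min_outcome_corr:
        return False
    # No strong correlation with the other candidate factors.
    return all(abs(df[factor].corr(df[other])) <= max_factor_corr
               for other in other_factors)
```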

One of the factors included in the statistical models is the higher education provider where the student is registered.

There are other provider characteristics which we might expect to correlate with the outcomes of its students, such as the number of students registered, the ‘mission group’ of the provider or the average tariff scores of its entrants. We have decided not to control for this information when modelling student outcomes, as it is not our intention to explain student outcomes through characteristics which are within the provider’s control.

This analysis seeks to identify factors associated with continuation, completion and progression, and better understand the extent to which differences in these outcomes can be accounted for by other underlying differences in student characteristics.

This will allow providers to develop appropriate strategic measures within their access and participation plans which seek to address these observed differences in student outcomes.

Questions relevant to the other dashboards

In using the dashboards, you’ll see reference to the following terms and features of the student outcome and experience measures:

  • Denominator of the indicator: The total number of students in the population for which we are measuring outcomes or experiences.
  • Numerator of the indicator: The number of students who achieve the outcome or experience in question.
  • Indicator value (as a proportion): Calculated in percentage terms as the numerator divided by the denominator. This is the rate at which students have achieved the outcome or experience in question, expressed as a point estimate providing a factual representation of the actual population of students present at a particular provider at a particular time.
  • Benchmark value (as a proportion): Calculated in percentage terms for each provider as a weighted sector average which takes account of that provider’s particular mix of students. Benchmarks give information about the values that the sector overall might have achieved for the indicator if the characteristics included in the benchmarking factors were the only ones that mattered.
  • Difference between indicator and benchmark: This is a point estimate of the difference between the indicator and benchmark (expressed as indicator minus benchmark).
  • Contribution to the benchmark: Calculated in percentage terms for each provider as the weighted average of the provider’s own students contributing to the sector averages that are used to calculate their benchmark.
  • Response rates (for progression outcomes and student experience measures): Calculated in percentage terms as the number of students who responded to the relevant survey, divided by the total number of students eligible to be surveyed.
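
As a simplified sketch of how these quantities relate to one another (the benchmarking shown here is a minimal reading of the ‘weighted sector average’ description above; the full published methodology is more involved):

```python
import pandas as pd

# Toy student-level data: provider, a benchmarking group (standing in for a
# combination of benchmarking factors) and a binary outcome. All invented.
students = pd.DataFrame({
    "provider": ["A"] * 6 + ["B"] * 6,
    "group":    ["g1", "g1", "g1", "g2", "g2", "g2"] * 2,
    "achieved": [1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0],
})

def indicator(df, provider):
    """Indicator value: numerator divided by denominator, as a percentage."""
    own = df[df["provider"] == provider]
    return 100 * own["achieved"].sum() / len(own)

def benchmark(df, provider):
    """Benchmark value: sector-wide rate for each benchmarking group,
    weighted by the provider's own mix of students across those groups."""
    sector_rates = df.groupby("group")["achieved"].mean()
    own_mix = df[df["provider"] == provider]["group"].value_counts(normalize=True)
    return 100 * (sector_rates * own_mix).sum()

ind, bmk = indicator(students, "A"), benchmark(students, "A")
print(f"Indicator: {ind:.1f}, benchmark: {bmk:.1f}, difference: {ind - bmk:.1f}")
# Indicator: 50.0, benchmark: 66.7, difference: -16.7
```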

Each dashboard contains a yellow ‘Help guide’ button in the top right, providing key points for how to navigate and use the dashboard.

On the webpage for each dashboard you will also find instructions for applying the filters and information about the choices available when exploring the data.

Throughout the dashboards, student outcome and experience measures are presented with ‘shaded bars’ to represent the statistical uncertainty associated with observed values.

There are two observed values that have been proposed for use in these assessments, and we will show a shaded bar in respect of each case:

  • The observed value of the indicator as a point estimate, reporting the proportion of students that we observe to have achieved a certain outcome or reported a certain experience. We refer to this as a measure of the provider’s absolute performance.
  • The observed value of the difference between the indicator and its associated benchmark, as a point estimate. We refer to this as a measure of the provider’s relative performance.

The shaded bars that we are showing are illustrated below. They are constructed around the point estimate, shown as black vertical lines, and aim to represent the continuous spread (or distribution) of statistical uncertainty around the point estimates that we have calculated. The shading of the bars indicates the changing likelihood that underlying provider performance takes different values, with the darkest shading representing the range with the highest likelihood of containing the provider’s true underlying performance. Much like the bell curve of a Normal distribution, as the shading lightens in both directions it represents a lower likelihood that true underlying performance falls in that range.

The two bars are differentiated by colour to represent the different interpretations of performance. The spread of statistical uncertainty associated with the absolute performance is represented in a green shaded bar, whereas that associated with the relative performance is represented in a blue shaded bar.

A more detailed, technical description of the statistical methods used to create the shaded bars is available within the supporting documentation published alongside the constructing student outcome and experience indicators consultation. The proposed interpretation of these bars for the specific purposes of TEF and baseline quality assessment are described within the relevant consultation documents.

We have proposed that, in regulation of student outcomes, we will make use of the shaded bars by establishing the confidence with which we can say that a provider’s underlying performance is above or below a numerical threshold.

To facilitate consistent interpretations of this confidence by users, we have summarised the proportion of the distribution represented by the shaded bar that falls above or below those thresholds. These summary figures are reported in a supplementary table alongside the shaded bars, with the intention that the two are used together to inform an accurate and consistent interpretation of statistical confidence related to the thresholds that the OfS have proposed to make use of.

These summary figures are highlighted where they report that at least 75 per cent of the distribution represented by the shaded bar falls above or below those thresholds.

For each type of shaded bar, green or blue, the student outcomes dashboard reports on:

  • Green bar (absolute performance):
    • proportion of the uncertainty distribution below the numerical threshold
    • proportion of the uncertainty distribution above the numerical threshold
  • Blue bar (relative performance):
    • proportion of the uncertainty distribution below the benchmark
    • proportion of the uncertainty distribution above the benchmark.
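
A minimal sketch of how such summary figures could be computed, assuming the uncertainty distribution is approximated as a normal distribution around the point estimate (the actual construction is set out in our statistical methods document and may differ):

```python
from scipy import stats

def proportions_about(point_estimate: float, std_error: float,
                      reference: float) -> tuple[float, float]:
    """Proportion of the uncertainty distribution falling below and above
    a reference value (a numerical threshold, or the benchmark)."""
    below = stats.norm.cdf(reference, loc=point_estimate, scale=std_error)
    return below, 1 - below

# Green bar: an indicator of 82.0 per cent against a numerical threshold
# of 80 per cent (all values invented for illustration).
below, above = proportions_about(82.0, 1.5, 80.0)
print(f"Below threshold: {below:.1%}, above threshold: {above:.1%}")
# The 'above' figure exceeds 75 per cent, so it would be highlighted.
```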

A more detailed, technical description of the statistical methods used to create the shaded bars is available within the supporting documentation published alongside the constructing student outcome and experience indicators consultation. The proposed interpretation of these bars for the specific purposes of regulating student outcomes is described within the related consultation documents.

We have proposed that TEF assessment will make use of the shaded bars by establishing the confidence with which we can say that a provider’s underlying performance is materially above or below the provider’s benchmark.

To facilitate consistent interpretations of this confidence by users, we have summarised the proportion of the distribution represented by the shaded bar that falls above or below the values that define those materiality areas. These summary figures are reported in a supplementary table alongside the shaded bars, with the intention that the two are used together to inform an accurate and consistent interpretation of statistical confidence related to the thresholds that the OfS have proposed to make use of.

These summary figures are highlighted where they report that at least 75 per cent of the distribution represented by the shaded bar falls within a given materiality area.

The TEF dashboard reports on:

  • Green bar (absolute performance). No supplementary information is provided.
  • Blue bar (relative performance):
    • proportion of the uncertainty distribution in the materially below benchmark area, with the value that defines this area set at -2.5
    • proportion of the uncertainty distribution in the broadly in line with benchmark area, between the values that define this area of -2.5 and 2.5
    • proportion of the uncertainty distribution in the materially above benchmark area, with the value that defines this area set at 2.5.
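
Under the same normal approximation (again an assumption for illustration), the three materiality proportions for the blue bar could be computed as:

```python
from scipy import stats

def materiality_proportions(difference: float, std_error: float,
                            cutoff: float = 2.5):
    """Split the uncertainty distribution of (indicator minus benchmark)
    into the materially below, broadly in line, and materially above areas."""
    dist = stats.norm(loc=difference, scale=std_error)
    below = dist.cdf(-cutoff)
    above = 1 - dist.cdf(cutoff)
    return below, 1 - below - above, above

# A difference from benchmark of +3.0 percentage points with a standard
# error of 1.2 (invented values).
below, in_line, above = materiality_proportions(3.0, 1.2)
print(f"Materially below: {below:.1%}, broadly in line: {in_line:.1%}, "
      f"materially above: {above:.1%}")
```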

For the continuation measure in the TEF dashboard, where the benchmark is at 95 per cent or higher, we also colour the point estimate of the shaded bar yellow and highlight the benchmark yellow.

A more detailed, technical description of the statistical methods used to create the shaded bars is available within the supporting documentation published alongside the constructing student outcome and experience indicators consultation. The proposed interpretation of these bars for the specific purposes of TEF is described within the related consultation documents.

All of the data has been calculated on unrounded values. In order to prevent the disclosure of sensitive data, we have subsequently applied rounding and suppression rules as follows:

  • [low]: where there are fewer than 23 students in the denominator.
  • [N/A]: where the data item is not applicable to that population. For example, benchmark data is not applicable to the sector rates (the ‘*All OfS registered providers’ option in the dashboard) and these are shown as [N/A].
  • [RR]: for the progression or student experience measures where the provider participated in the relevant survey (Graduate Outcomes survey or National Student Survey respectively) but has not met the response rate threshold required (50 per cent for the National Student Survey, 30 per cent for the Graduate Outcomes survey).
  • [BK]: where the benchmarks are suppressed because at least 50 per cent of the provider’s students have unknown information for one or more of the factors used for that benchmark calculation.
  • [DPL]: where data has been suppressed for data protection reasons. The code [DPL] has been used to indicate where the data has been suppressed due to a numerator or headcount that is less than or equal to 2.
  • [DPH]: where data has been suppressed for data protection reasons. For the indicators data, the code [DPH] has been used to indicate where data has been suppressed due to a numerator that is greater than 2 but is within 2 of the denominator. For the overall shape and size of provision data, the code [DPH] has been used to indicate where data has been suppressed due to a headcount for a particular category of students being greater than 2 but within 2 of the total number of students who are taught or registered by the provider.
  • [DP]: where data has been suppressed for data protection reasons. The code [DP] has been used where further data protection has taken place for sensitive data.
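
For illustration, a sketch of how these codes might be applied to a single data item (this reflects our reading of the rules above, including an assumed order of precedence between the codes; it is not the OfS’s published implementation):

```python
def suppression_code(denominator: int, numerator: int,
                     benchmark_applicable: bool = True,
                     unknown_factor_share: float = 0.0,
                     response_rate: float | None = None,
                     response_rate_threshold: float = 0.5):
    """Return a suppression code, or None if the value can be published.
    The order of the checks below is an assumption. [DP] (further
    protection of sensitive data) is applied case by case, not modelled here."""
    if not benchmark_applicable:
        return "[N/A]"
    if denominator < 23:
        return "[low]"
    if response_rate is not None and response_rate < response_rate_threshold:
        return "[RR]"  # threshold: 0.5 for the NSS, 0.3 for Graduate Outcomes
    if unknown_factor_share >= 0.5:
        return "[BK]"  # too much unknown benchmarking factor information
    if numerator <= 2:
        return "[DPL]"  # numerator or headcount of 2 or fewer
    if denominator - numerator <= 2:
        return "[DPH]"  # numerator within 2 of the denominator
    return None

print(suppression_code(denominator=100, numerator=1))   # [DPL]
print(suppression_code(denominator=100, numerator=99))  # [DPH]
print(suppression_code(denominator=100, numerator=60))  # None
```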

These dashboards do not include data about named providers. Our January 2022 consultations proposed that we will publish data at provider level in the implementation of our confirmed approaches.

We will publish separate guidance for providers writing a TEF submission. In the meantime, our proposals for interpretation and use of this data in TEF assessment are described within the TEF consultation. We have now published our analyses of responses to the consultations on teaching excellence.

Providers can use our published technical documentation to understand their own student data and conduct a thorough assessment of performance.

Both the green shaded bars (for the indicators) and the blue shaded bars (for the differences between the indicator and benchmark) are constructed using a series of confidence intervals. The confidence intervals we plot as the shaded bars begin at the 75 per cent significance level and increase in 2.5 percentage point increments. The 75 per cent confidence interval in the centre of the bar is shaded darkest, and the shading becomes incrementally lighter at the limits of each subsequent confidence interval as the significance level increases.

When constructing the bars, the upper and lower limits of each increasing confidence interval correspond to the points on the bar where the shading changes. The shading to the left of the observed indicator (or difference from benchmark) is determined by the lower limits of the confidence intervals. The shading to the right is defined by the upper limits of the confidence intervals.
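
Under a normal approximation (an assumption for illustration; the published statistical methods document defines the actual construction), the nested interval limits could be computed as:

```python
import numpy as np
from scipy import stats

def shaded_bar_limits(point_estimate: float, std_error: float,
                      start: float = 75.0, step: float = 2.5):
    """Lower and upper limits of the nested confidence intervals that define
    the shading bands, from the darkest (75 per cent) band outwards. The
    upper end of the range of levels is assumed for illustration."""
    levels = np.arange(start, 100.0, step)  # 75.0, 77.5, ..., 97.5
    limits = {}
    for level in levels:
        z = stats.norm.ppf(0.5 + level / 200)  # two-sided interval
        limits[level] = (point_estimate - z * std_error,
                         point_estimate + z * std_error)
    return limits

for level, (lo, hi) in shaded_bar_limits(82.0, 1.5).items():
    print(f"{level:5.1f}%: ({lo:.1f}, {hi:.1f})")
```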

The upper and lower limits for each confidence interval represented by the shaded bars are available in data downloads corresponding to the relevant dashboard (see the 'Get the data' sections on the student outcomes dashboard page and the TEF by provider dashboard page). 

The data downloads provide information in two formats: CSV and Excel workbooks. In both data downloads, each row corresponds to an individual bar after filters have been applied.

In the CSV files:

  • The confidence limits used to construct the green shaded bars for the indicators are named “INDICATOR_LOWERXX” and “INDICATOR_UPPERXX”
  • The confidence limits used to construct the blue shaded bars for the difference between indicator and benchmark are named “DIFFERENCE_LOWERXX” and “DIFFERENCE_UPPERXX”

In the Excel files:

  • The confidence limits used to construct the green shaded bars for the indicators are named “Indicator confidence interval (lower, %) XX” and “Indicator confidence interval (upper, %) XX”
  • The confidence limits used to construct the blue shaded bars for the difference between indicator and benchmark are named “Difference confidence interval (lower, %) XX” and “Difference confidence interval (upper, %) XX”

In all cases, the XX suffix refers to a given confidence interval and can take a value starting at the 75 per cent significance level and increasing in 2.5 percentage point increments thereafter.
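
For example, the confidence limit columns could be extracted from a CSV download with pandas (the filename below is a placeholder; check the exact column names, including how the XX suffix is formatted, against the download itself):

```python
import pandas as pd

# Placeholder filename: use a CSV from the relevant 'Get the data' section.
df = pd.read_csv("student_outcomes_data.csv")

# Select every confidence limit column for the indicator (green bars) and
# for the difference from benchmark (blue bars), whatever suffix each carries.
indicator_limits = df.filter(regex=r"^INDICATOR_(LOWER|UPPER)")
difference_limits = df.filter(regex=r"^DIFFERENCE_(LOWER|UPPER)")

print(sorted(indicator_limits.columns))
print(sorted(difference_limits.columns))
```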

Published 20 January 2022
Last updated 26 July 2022
26 July 2022
Link to analyses of responses to the consultations on student outcomes and teaching excellence added. Minor FAQ updates to reflect that the analyses of responses have been published.
31 January 2022
One FAQ added
