Today we have published our latest analysis of degree attainment in English higher education. Our report, which includes new data from 2020-21, looks at changes over time in the proportions of students receiving a first or 2:1 degree from their university or college. These proportions have increased significantly over the past decade. They have become a focus for public and sector concern, with regular media coverage of ‘grade inflation’.
Since 2010-11, the rate of firsts awarded nationally has more than doubled, from 15.7 per cent to 37.9 per cent in 2020-21. Our report focuses on ‘unexplained attainment’: the extent to which the increase in awards of first and 2:1 degrees can – or cannot – be explained by observable factors which may affect attainment, such as changes in the make-up of the graduate population.
Our analysis shows that sector-level unexplained attainment (the percentage that cannot be accounted for by changes in the graduate population since 2010-11) for first class awards did not rise between 2019-20 and 2020-21, and for firsts and 2:1s combined it declined slightly. However, we remain concerned that unexplained attainment for first class degrees in 2020-21 still stood at 22.4 percentage points – more than the 22.2 percentage point rise in the rate of firsts over the decade. In other words, according to our analysis none of the sector increase in first class attainment since 2010-11 can be accounted for simply by observable changes in the graduate population. The possibility remains that rates of firsts and levels of unexplained attainment could rise further in years to come.
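To make the idea of ‘unexplained attainment’ concrete, here is a minimal, purely illustrative sketch in Python of the kind of decomposition involved: hold group-level attainment fixed at its 2010-11 baseline, compute the rate that the 2020-21 population mix alone would imply, and treat the gap between that and the observed rate as unexplained. The group definitions, group rates and population shares below are invented for illustration – only the sector-level rates of firsts come from the report – and the actual OfS modelling accounts for many more observable characteristics than this.

```python
# Illustrative sketch only: not the OfS methodology or data. The groups,
# group-level rates and population shares are invented; only the sector
# rates of firsts (15.7 per cent in 2010-11, 37.9 per cent in 2020-21)
# come from the published report.

# Rates of firsts per (hypothetical) entry-qualification group,
# held fixed at their 2010-11 baseline levels.
baseline_rate = {"high_entry": 0.28, "mid_entry": 0.14, "low_entry": 0.08}

# Hypothetical population shares for the two cohorts.
share_2010 = {"high_entry": 0.25, "mid_entry": 0.45, "low_entry": 0.30}
share_2020 = {"high_entry": 0.35, "mid_entry": 0.45, "low_entry": 0.20}

def mix_rate(shares, rates):
    """Overall rate implied by a population mix under fixed group rates."""
    return sum(shares[g] * rates[g] for g in shares)

observed_2010 = 0.157  # published sector rate of firsts, 2010-11
observed_2020 = 0.379  # published sector rate of firsts, 2020-21

# Expected 2020-21 rate if only the population mix had changed:
# the baseline rate plus the shift attributable to the changed mix.
expected_2020 = (observed_2010
                 + mix_rate(share_2020, baseline_rate)
                 - mix_rate(share_2010, baseline_rate))

# Whatever the mix shift cannot account for is 'unexplained'.
unexplained = observed_2020 - expected_2020
print(f"expected 2020-21 rate of firsts: {expected_2020:.1%}")
print(f"unexplained attainment: {unexplained * 100:.1f} percentage points")
```

On these made-up inputs the mix shift explains only a small part of the rise, leaving roughly 20 percentage points unexplained; the report’s corresponding sector-level figure, derived from the full model, is 22.4 percentage points.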
Why does this matter?
The credibility, value and reliability of the qualifications awarded by higher education providers is crucially important for students, their families, employers and other stakeholders. It matters, too, for the international reputation of the English higher education sector, and to taxpayers, who contribute significantly to funding higher education.
Unexplained grade inflation risks undermining public confidence in higher education and the hard work of students. Degrees must stand the test of time – it’s vital that public confidence in them is maintained and enhanced.
A strengthened regulatory approach
As the regulator for higher education in England, the OfS has an important role to play in ensuring confidence in the value of qualifications is protected. We recently published a revised ongoing condition of registration relating to degree assessment and awards – condition B4 – as part of our strengthened approach to quality and standards. Universities and colleges registered with the OfS need to satisfy this condition, which came into effect on 1 May 2022.
Condition B4 requires universities and colleges to assess students effectively, and to award qualifications that are credible compared to those granted previously, and that are based on the knowledge and skills of students. The same level of student achievement should not be rewarded with higher degree classifications over time.
The condition sets out a series of minimum requirements that, taken together, will ensure the value of qualifications now and in the future is protected from factors that are not linked to actual student achievement. The full requirements are set out in the 1 May amendments to our regulatory framework.
Reliability
Condition B4 includes a requirement that each assessment a student takes is ‘reliable’. ‘Reliable’ in this context means that:
‘an assessment, in practice, requires students to demonstrate knowledge and skills in a manner which is consistent as between the students registered on a higher education course and over time, as appropriate in the context of developments in the content and delivery of the higher education course’.
If assessment is not reliable, that would suggest factors other than a student’s actual achievement were affecting the outcomes of the assessment, either within a cohort of students, or over time between different cohorts.
Credibility
Condition B4 also requires a provider’s academic regulations – the rules individual providers put in place to ensure rigour and consistency in their academic assessments and awards – to be designed to ensure that relevant awards are ‘credible’ at the point of being granted and when compared to those granted previously.
In reaching a view about the credibility of an award, the OfS may consider ‘any actions the provider has taken that would result in an increased number of relevant awards’. For example, it would concern us if a provider had changed its academic regulations in a way that had resulted in an increased number of first class degree awards that could not be linked to a measurable increase in actual student achievement. If a provider’s awards do not meet our requirements relating to credibility, its actions might be undermining the hard work and genuine achievement of students.
Investigating unexplained attainment
We know that increases in unexplained attainment or in the number of first-class degrees could be due to a range of positive factors that would not be likely to affect compliance with OfS conditions of registration. When we look at the data for individual providers we will of course also take account of the recent pandemic. However, the pattern of increase – in particular, in the level of unexplained attainment for first class degrees – over a sustained period at sector level since 2010-11 continues to give us concern. We want to ensure that, if degree outcomes have become artificially inflated, this does not become ‘baked in’ to the system.
We remain willing to intervene to protect the value and credibility of awards. Where we identify increases in unexplained attainment for an individual provider, or other information that gives rise to concern about the value of qualifications, we expect to investigate further. We will publish more about our plans for these investigations over the next few weeks.
Comments
Between 1988 and 2012 I coordinated a university first year module. Over a few years, 1988-1996, the number of students doubled and the average mark fell from 54% to below 30%. To remain viable we had to raise the marks of students getting 25 up to 35, so that their coursework would give them an average pass of 40%. We did this by removing difficult (especially numerical) components, teaching more to the questions than the subject, and giving 20% of the marks for rote learning: filling in missing words in "key" sentences which were given to the students to accompany each lecture, and identified as such. There was a tendency to simplify questions and expectations of answers too. This got the weak students through with a moderately good A-level knowledge, but the first class students who would have got 75 now got marks in the 80s or 90s. We were commended for our achievement. The data are published: Journal of Biological Education, Volume 34 (1999), pages 32-35, https://doi.org/10.1080/00219266.1999.9655680

Similar modifications were made for final exams. When I was an undergraduate, we had two papers of "problem questions". These were unseen original research data which we were expected to make sensible comments about, using the overall knowledge and understanding gained on the course. By 2000, most universities had dropped these because most students could not attempt them. Removing these papers put average marks up. The students had come through an education system where they were taught what to put in answers to standard questions, e.g. reproduce the notes for the appropriate lecture. The technique for a three-question paper became to prepare and learn two formal answers, put them down for the most appropriate questions, and hope to muddle through a third answer to get a 2:1 average.

When I started marking honours courses in 1982, we marked to degree class, essentially in 10-mark bands: below 40 was inadequate, 40-50 a third, 50-60 a lower second, 60-70 an upper second, and 70+ a first. The maximum almost never exceeded 75; a textbook might have got 80. From 2000 to 2010 there was a noticeable amelioration in the attitudes of external examiners, and calls from some to use the whole 100-point marking scale. This changed the first class band into a 30-point range. Previously, an average of a brilliant mark and a fail mark was (39+76)/2 = 57.5, a high lower second, showing limited areas of good knowledge. With the 76 suddenly scaled up to 95, that average became 67, a top 2:1, eligible for elevation to a first by various "predominance" rules. One bullseye pre-learned answer could almost guarantee a 2:1 or first for that paper. There was also a tendency for a middle-of-the-road mark to go from a central position of 58 or 60 (59s were banned), on the 2:2-2:1 border, towards 70, borderline first, halfway between pass (40) and top (100). So expanding the top band automatically raised markers' conception of appropriate marks.

With all calculations done numerically, as rules to be obeyed, the examiners' meeting became powerless and irrelevant. There was no way to assess the appropriateness of any student's degree class. Standards were determined by central university scales and requirements. Lecturers, as tutors and markers, became detached from involvement with final degree classification. Internal audit and censure soon persuaded anyone maintaining standards that they should join the race to the top. Attendance at assessment events became patchy and cursory.
By 2010, graduates who had come through the easier 1999 standards were 30+ years old and actively lecturing. As the staff who remembered the 1980s (let alone the 60s and 70s) retired, the way was open for round after round of reinforcement of the decline, as younger staff, familiar with the lowered standards, continued the process – and gained from more lenient marking through better student assessments of their teaching and congratulations from senior staff for improving pass rates. Hugh Fletcher, one-time departmental director of education.
7 Oct 2022 - 2:32PM
Universities are commercial entities, and there is a financial incentive for them to inflate grades to increase their customer appeal. I would be very interested to see a further breakdown of this data by subject and university. For instance, is there a statistically significant difference in grade inflation between Russell Group universities (which arguably don't have as much of an incentive to inflate) and others? Also, is there a significant difference between those subjects which are historically oversubscribed and those that are not?
12 May 2022 - 11:33AM