
The NSS: Unfit for Purpose

  • 8 July 2019
  • By Richard Budd

This is a guest blog by Dr Richard Budd of the Centre for Higher Education Research and Evaluation (CHERE) at Lancaster University.

The National Student Survey (NSS) has supposedly tracked the quality of undergraduate provision across the UK since 2005. It is apposite that universities are encouraged to pay attention to their teaching – the introduction of the NSS seems to assume that they otherwise wouldn’t – but the survey probably does not achieve this. This is because it is underpinned by several flawed assumptions that render it highly problematic at best, and meaningless at worst.

The first and fundamental assumption is that NSS scores reflect teaching quality. The OfS describes it as gathering ‘students’ opinions on the quality of their courses’, which is obviously slightly different: it is a proxy measure. This is not to say that students’ perspectives are not valuable – they are – but pedagogy takes many forms, and a session or course may work well one year and not the next, or well for some groups and not for others. The processes of pedagogy are complex, and as educators we continuously adapt our classroom practice accordingly.

The survey addresses areas such as support, teaching, and feedback, but one-sidedly, focusing entirely on the university’s provision. The central role – that of the student – is curiously absent. Access to staff, information on assessment procedures, and ‘opportunities to explore ideas or concepts in depth’ will have been present, but students may not have availed themselves of them for various reasons. We can sometimes be more explicit – often more inclusive – but students are not passive recipients of an education and they know this. Capturing the extent to which students have engaged is difficult (it is invisible to systemic monitoring), and asking them to self-report would be a nonsense; in combination with the ongoing grade inflation furore, the media would have a field day.

The relationship between the NSS and teaching quality is therefore tenuous, and is made more so through the next assumption, which is that responses reflect three or more years of pedagogy. Sabri (2013) explored the realities of the NSS at institutional level, and reports that some students not only found the questions infuriatingly simplistic, but also cited difficulties in thinking beyond their teaching at the point the survey is administered – the spring of their final year. Matters are further complicated when students are taking a degree in more than one discipline – which department do you report on? Sabri also notes that the focus for students at this point is overwhelmingly on their final assignments and life beyond graduation. (The timing is tricky for universities, too, as it comes when interim grades are being released; this can encourage universities to rush results out to please students, placing a further burden on already overworked staff.) IPSOS MORI suggest the NSS should take about ten minutes to complete, but if your main aim is to get it out of the way to stop the incessant messages exhorting you to complete it, 27 Likert-scale responses can take far less. It is supposed to be voluntary, but I doubt it feels that way, and universities are under major pressure to ensure that at least 50 per cent of students on every course complete it.

A further misleading premise is that the data is comparable between and within universities, and over time. This would require the NSS to be immune to changing social, political and economic conditions, as well as generically applicable across the full spectrum of teaching provision. There is some evidence that disciplines mediate how students approach the survey questions, and that this works against courses with fewer classroom hours and more discursive teaching cultures. Research also suggests that elite universities admit students with higher grades in part because those students will do well with relatively limited support, freeing up more time for research. On the flipside, the less prestigious post-92 institutions, which are less academically – and therefore socially – selective, may be more supportive of their students.

NSS scores are far more valuable to post-92s, which cannot compete in research status terms, so they are likely to view the survey differently. Some universities expend a huge amount of energy in relation to it, but constantly seeking feedback in the interests of delivery optimisation (and subsequent NSS results) can be counterproductive if it encourages students to be excessively critical of everything. It is undocumented but widely known that universities game the NSS relentlessly – rewards for completion, social events to improve morale, and so on – as happy students are more likely to report positively. The NSS Good Practice Guide forbids ‘inappropriate influence’ such as discussions with students as to the nature of the questions or the implications of the results. This is infantilising: students know what the NSS is, they can see the issues with it, and they must sense the institutional nervousness around it. An open discussion with them as to how it all works (or doesn’t) would surely be far healthier.

Another implicit, incorrect assumption is that the NSS strongly informs student choice. This is based on the false notion that all potential applicants are equally well-informed, entirely rational in their decision-making, and that they place great value on NSS scores. Research from 2015 shows that the NSS has a far weaker influence on choice than university status – a 10 per cent rise in NSS scores (which would be very large) only creates a 2.5 per cent increase in applications. The results do, though, feed into a number of rankings and the TEF, which, of course, also doesn’t certify teaching quality. League tables are widely known to be specious, and feeding bad data into a poor model is not the answer: two wrongs don’t make a right.

These issues in combination mean that the NSS would struggle to achieve a pass grade in Social Science Research Methods 101. The same could be said of most metrics which claim to represent the state of UK HE. It is perverse that, for a sector which revolves above all around the production, verification, curation, and dissemination of high-quality knowledge, we are partly governed by incredibly poor data. We also demean our mission when we market ourselves with it.
