
The true potential of a national student survey

  • 19 April 2021
  • By Johnny Rich

This blog was kindly contributed by Johnny Rich, Chief Executive of the Engineering Professors’ Council, Chief Executive of outreach organisation Push and a consultant on higher education. He is writing in a personal capacity. You can find Johnny on Twitter @JohnnySRich or reach him via his website at www.johnnyrich.com.

Last year, the Department for Education asserted that the National Student Survey ‘exerts a downward pressure on standards’ in higher education by inciting ‘dumbing down and spoon-feeding students’. No evidence was offered to support the claim, but the Office for Students (OfS) was told to run along like a good little regulator and review the NSS all the same.

The OfS has now responded with the Phase One report of its review, which found no ‘evidence of a systemic issue of grade inflation or a lowering of standards’. On the contrary, its consultation showed wide support for NSS’s role in helping higher education institutions to enhance quality. This verdict may prove sufficient for OfS to avoid being in the invidious position of using its role as the supposed champion of students to axe the only significant official mass collection of student opinion.

However, once the review train starts rolling, it takes more than the undermining of its entire premise to stop it. And so, the NSS will be reformed, whether it needs it or not. I happen to believe it does – but not for the reasons stated by the Government – and even timid change presents bold opportunities.

Among the proposed reforms is dropping the word ‘satisfaction’ from Question 27, the survey’s crunch question, which asks respondents to say whether they agree with the statement ‘Overall, I am satisfied with the quality of my course’.

This has always been the NSS’s fatal flaw. Satisfaction is problematic: it is not a measure of quality, but a measure of the distance between expectation and delivery. It reflects a relationship between the student and their institution. There are two variables, and measuring the distance between them tells you nothing unless you know at least one of the end points.

The same problem applies to many metrics in higher education: outcomes, teaching quality and admissions are just a few. In each case the data involved is a function, not a single collectible value. However, if we acknowledge that measuring quality is about relationships rather than absolutes, that opens up opportunities for better metrics.
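To make the point concrete, here is a toy sketch. Everything in it is hypothetical – the function, the scale and the numbers are illustrative assumptions, not anything the NSS actually computes. If satisfaction behaves like the gap between what was delivered and what was expected, two very different courses can produce identical scores:

```python
# Hypothetical model: satisfaction as the gap between delivery and expectation.
# Illustrative only; nothing here reflects how the NSS is actually scored.

def satisfaction(expectation: float, delivery: float) -> float:
    """One number summarising the distance between two unknowns."""
    return delivery - expectation

# An excellent course meeting sky-high expectations...
print(satisfaction(expectation=9.0, delivery=9.5))  # 0.5
# ...scores exactly the same as a weak course clearing a very low bar.
print(satisfaction(expectation=3.0, delivery=3.5))  # 0.5
```

A single reading of the difference cannot recover either underlying quantity, which is the author’s point: the score alone tells you nothing about quality.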

Graham Gibbs’ two seminal papers on Dimensions of Quality for the Higher Education Academy homed in on this and made clear that a key precursor to effective learning is engagement.

The NSS was tweaked a few years ago to acknowledge this, adding a question about whether students ‘feel part of a community of staff and students’. It is hardly a probing investigation and – if we are to have reform now – we should grasp the opportunity to introduce a more thorough student engagement survey.

This is not a radical idea. Many countries – including the US, Canada and Australia – run national student engagement surveys. The UK already has its own, but participation is an optional benefit of AdvanceHE membership and the findings are not generally made public.

As well as focusing on engagement, the survey should – as the OfS has mooted – be extended to all students, rather than just finalists. There are two good reasons. Firstly, by their final year, it is too late to help those students whose responses might be highlighting problems. And, secondly, there are potential biases, such as the sunk cost fallacy pulling in one direction and negativity bias (axe-grinding, in effect) pulling in the other.

By extending the survey to all years, we can also track individuals over time, learning not only about their institution’s performance but also about their personal development. In effect, we are taking multiple readings, because change – like satisfaction – is a function that cannot be measured with a single datum.
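A minimal sketch of what that would enable (again, the data, names and scoring here are invented for illustration and are not the OfS’s methodology): with one reading per student per year, change can be estimated from successive readings, which no single final-year snapshot can supply:

```python
# Hypothetical longitudinal readings: one engagement score per student per year.
# Illustrative only; no national survey actually scores engagement this way.
engagement = {
    "student_a": {1: 6.2, 2: 7.1, 3: 7.8},
    "student_b": {1: 8.0, 2: 7.4, 3: 6.9},  # a decline a finalists-only survey cannot see
}

for student, readings in engagement.items():
    years = sorted(readings)
    # Change is a function of (at least) two readings, never one datum.
    deltas = [round(readings[b] - readings[a], 1) for a, b in zip(years, years[1:])]
    print(student, deltas)  # e.g. student_a [0.9, 0.7]
```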

Having the ability to measure student engagement and change over time will give the NSS more purpose. The original intent was to provide useful information for prospective students. As the OfS has confirmed, the NSS has never done that well. In fact, of the small proportion of applicants who pay it any attention whatsoever, most use it as a post hoc rationalisation for choices they have already made.

It was only after the NSS had been around for a year or two that it became clear it was more useful as a means of driving quality enhancement, and that soon became its primary purpose. That was not an unhappy outcome: as a way of raising standards, it was just as effective as driving competition through better-informed consumer choice, certainly given the failed attempt to achieve the latter.

However, even then, the NSS suffered from the limits of student respondents’ ability to make meaningful comparisons. I can answer a survey about a restaurant because I’ve eaten in a few over the years. I’m far less able to answer a survey on sewage disposal, because I have no points of comparison and no expertise. Indeed, I’m only likely to bother answering such a survey if I have a complaint. The same applies to most students, who are experiencing higher education for the first time.

Again, engagement data would be far more useful than satisfaction data to higher education institutions seeking enhancement, because it encourages them to change the very thing that might actually have an impact. With satisfaction, there is always a perceived pull either to lower expectations or to pander to superficial ones. As these are precisely the unproven concerns the DfE has about the NSS, the OfS is right to want to remove satisfaction from the equation, but it would be counterproductive simply to diminish the opportunity for the student voice to be heard, or to replace it with even poorer proxies for quality.

As it happens, engagement data would also be more useful than satisfaction data in informing prospective students. Or perhaps not the data itself: rather, running an engagement survey would encourage higher education institutions to reflect on how they engage students. That might help them articulate exactly how they shape the education they offer as a joint endeavour, rather than allowing it to be seen as a service they deliver (or, worse, an activity they do to students). It would also tell prospective students what learning experience they could genuinely expect and how it differs from that of other higher education institutions or courses.



2 comments

  1. Anonymous says:

    Johnny, how would you survey engagement effectively, with proxy metrics?

  2. John Baker says:

Nice article Johnny, and I agree with you about engagement being a factor worthy of greater consideration, but I can’t help thinking that the problem with a survey mechanism is that a question about engagement is just as problematic as the one about satisfaction. Particularly when there is so much data (admittedly held by institutions, and yes, with some gamification challenges) that could provide a more reliable indicator of engagement: numbers of books taken out of the library, journals accessed online, volumes of interaction in blended delivery forums and, once everyone is allowed back on campus, facility access rates, and so on. I similarly breathe out a sigh every time the staff engagement survey arrives (should we delay it until the weather has improved?), as institutions sit on so much data that could be a more reliable indicator of genuine engagement – numbers of appraisals completed within set timeframes, absence statistics, participation in centre-directed activity – rather than the occasional collection of views, which can sometimes seem to provide an expungement opportunity for accumulated gripes.
