‘Course evaluations do not work’

Course evaluations say more about the weather than the quality of teaching, as far as statistics instructor Casper Albers is concerned. Yet they are used to determine which instructors should be given a permanent position. It’s time to stop using them.

I give lectures to groups of hundreds of bachelor's students at a time. During the exam – amid the stress of finishing it within two hours – they also have to fill in an evaluation. That's useful, since I don't have time to ask every student individually for feedback.

Every year, there are a few helpful suggestions among the comments that I use to improve my courses for the next year. But those tips are often buried in an avalanche of utterly useless remarks. My favourite is the student who used the official evaluation form to complain that it was always raining when he had to bike to class (I wasn't happy about that, either).

Something completely different

This type of course evaluation is a relatively recent invention for measuring how satisfied students are with a course – useful information in itself. But that is rarely what the forms end up being used for. More often, they serve to judge whether a course is any good and whether it is taught by a good instructor. In certain faculties, they even inform hiring decisions: a permanent contract is only available to instructors with positive course evaluations.

A good instructor is someone who gets his or her students to achieve the learning outcomes, not someone who keeps them happy. To achieve those outcomes, a course may have to make students uncomfortable: you only truly learn when you leave your comfort zone.

Men vs. women

Complaints about student evaluations are hardly uncommon – especially from instructors with disappointing reviews (mine are usually pretty good, by the way). But a considerable number of recent scientific studies has made clear that criticism of course evaluations goes far beyond bitter teachers. First of all, course evaluations correlate strongly with aspects that have nothing to do with the quality of teaching. Evaluations are consistently more positive when it's sunny outside – a bummer for instructors evaluated in January, but a boon for those evaluated in June.

A second – more troubling – correlation exists between evaluation results and gender: male instructors are consistently better reviewed than female instructors (which could explain why mine are usually pretty good). This was measured in a very interesting study: after the conclusion of a MOOC, the students were told that the person who had helped them out online was named either Alice or Bill – but it was the same person. The online instructor named ‘Bill’ received significantly better evaluations than the online instructor named ‘Alice’…

Quality

Evaluations depend on all kinds of things that have no connection whatsoever to the quality of teaching. But that is still only half of the story. The other half: evaluations have nothing to do with the quality of education. Two meta-analyses published this year found that the correlation between quality of education and student appreciation is 0.01 – about as close to zero as a correlation can get.
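To get a feel for how weak a correlation of 0.01 is, here is a toy simulation in Python – made-up numbers for illustration only, not data from those meta-analyses. It sketches evaluation scores that are driven by the weather and random noise, and barely at all by teaching quality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical number of course evaluations

# Made-up, standardised latent variables
quality = rng.normal(size=n)   # actual teaching quality
sunshine = rng.normal(size=n)  # weather at evaluation time

# Toy model: the score loads on sunshine and noise, almost not on quality
score = 0.01 * quality + 0.4 * sunshine + rng.normal(size=n)

print(np.corrcoef(quality, score)[0, 1])   # close to 0.01 by construction
print(np.corrcoef(sunshine, score)[0, 1])  # around 0.37: a clear weather 'effect'
```

With a correlation that small, an evaluation score tells you essentially nothing about the teaching quality behind it.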

The explanation for that, according to Kornell & Hausman: ‘[…] that good teachers are those who require their students to exert effort; students dislike it, especially the least able ones, and their evaluations reflect the utility they enjoyed from the course. Overall, our results cast serious doubts on the validity of students’ evaluations of professors as measures of teaching quality or effort.’

Stop using them

In summary: (i) course evaluations do not measure what most people think they measure (such as quality of teaching); (ii) course evaluations measure things they are not supposed to measure (such as weather conditions). Yet they are used to decide which instructors get a permanent position, whether a course is 'good' or not, which courses need to be adapted, and so on. The Personnel faction of the University Council objects to this, which is why we have written a memorandum that we will soon discuss with the Board of Directors. We hope that the board will immediately stop using course evaluations in hiring decisions, and that it will raise awareness among educational committees and examination boards of the limitations of course evaluations.

We also hope that the board will join us in thinking of ways to better measure – and thereby improve – the quality of education. Our memorandum contains some suggestions; it is not yet clear which ones will work and which won't. But one thing we do know for sure: course evaluations do not work.

Casper Albers, associate professor of psychometrics and statistical techniques at the Heymans Institute and member of the University Council’s Personnel faction
