Growing evidence of anti-female bias in student surveys

Dutch researchers find female academics 11 percentage points less likely to hit promotion threshold in course evaluations

August 14, 2016
Slighted: student evaluations reveal evidence of ‘gender bias against female teachers’ and ‘do not exclusively evaluate the quality of a course’

A new study provides further evidence that students rate female lecturers more harshly than male academics in course evaluations.

Researchers examined five years’ worth of evaluations from Erasmus University Rotterdam’s International Institute of Social Studies and found that female lecturers were 11 percentage points less likely to receive an average score of at least four out of five from their students.

The study raises questions about the reliability of questionnaires such as the UK’s National Student Survey, which, as part of the teaching excellence framework (TEF), will play a key role in determining the tuition fees that English universities will be allowed to charge.

There is growing evidence that gender bias is a problem in student surveys. The Erasmus University paper follows a 2014 study by Anne Boring, a postdoctoral researcher at Sciences Po in Paris, which found that male students at one university were 30 per cent more likely to rate male teachers as excellent than they were female lecturers.


The Dutch paper also adds to doubts about the use of such ratings in hiring and promotion decisions. For Erasmus staff, a course rating of four or higher is vital because lecturers will be considered for promotion to assistant professor only if they have passed this threshold.

Researchers Natascha Wagner, Matthias Rieger and Katherine Voorvelt compared the evaluations of academics teaching 272 modules on a social studies master’s programme for their study, which has been published in the Economics of Education Review.


Once course-specific effects were controlled for, female academics received average scores that were 0.12 point lower than men’s on a five-point scale. While this sounds like a small difference, ratings were clustered very tightly around the overall average of 4.27, and gender was found to account for more than a quarter (27.6 per cent) of the variation in ratings.
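For readers wondering what “controlling for course-specific effects” looks like in practice, the sketch below shows one common way such an analysis can be set up: a regression of the evaluation score on an indicator for instructor gender, with a dummy variable for each course so the gender gap is estimated from within-course comparisons. This is a minimal illustration only, not the published analysis; the data, column names (score, female, course_id) and single-equation setup are assumptions.

```python
# Minimal sketch of a course fixed-effects regression (illustrative data only).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical evaluation data: one row per lecturer-course average rating.
df = pd.DataFrame({
    "score":     [4.3, 4.1, 4.4, 4.0, 4.5, 4.2],   # mean rating on a 1-5 scale
    "female":    [0,   1,   0,   1,   0,   1],      # 1 = female lecturer
    "course_id": ["A", "A", "B", "B", "C", "C"],    # module identifier
})

# C(course_id) adds a dummy per course, so the 'female' coefficient reflects
# within-course differences, mirroring "course-specific effects controlled for".
model = smf.ols("score ~ female + C(course_id)", data=df).fit()
print(model.params["female"])  # estimated gender gap in rating points
```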

Dr Wagner, an assistant professor in development economics, said that the results revealed evidence of “gender bias against female teachers” and confirmed that student evaluations “do not exclusively evaluate the quality of a course”.

She argued that student evaluations should not form part of hiring and promotion decisions because such a move “may put female lecturers at a disadvantage”.

“Employing student evaluations as a measure for teaching quality might be highly misleading,” Dr Wagner added.

Although previous studies have found evidence of bias against ethnic minority lecturers in student surveys, the Erasmus researchers found that any such effects were not statistically significant.

chris.havergal@tesglobal.com


Reader's comments (2)

The results of the Wagner et al. study confirm previous findings (Centra, 2009; Centra & Gaubatz, 2000; Feldman, 1993) that the effect of instructor gender on student ratings of instruction is small and should most likely not affect personnel decisions, as long as ratings are not the only measure of teacher effectiveness. We agree with Wagner et al. that “Cut-off points for excellence in teaching…are arbitrary and need to be complemented with qualitative feedback in order to get a holistic picture about teacher performance in class” (p. 92). We are troubled, however, that virtually no information is provided in the article about the survey used to collect ratings, other than that it “features questions about the course in general and one question about each specific teacher” (p. 83). Moreover, no evidence is presented to support the instrument’s validity and reliability. In fact, the measure of teacher effectiveness is based on a single item. In order to get a complete picture of instruction, we must continue to insist that students’ voices be heard: we owe them the opportunity to provide input about their learning experiences. That feedback is valuable to the instructor, as it can help them improve their teaching, and it is valuable to the institution, as it provides another set of data that can be used to help evaluate, support and grow its faculty.
The article also does not mention whether the students surveyed were mostly male; it is possible that the majority of the students were female.
