Last month at Memorial, the Senate Committee on Course Evaluation (SCCE) concluded a survey soliciting instructor opinions about the course evaluation questionnaire (CEQ).
My hope is this will launch an overhaul of course evaluation methods at Memorial. For reasons of fairness, workplace wellness and accuracy, the CEQ as it stands should be scrapped and replaced with better measures of teaching effectiveness. Here’s why.
Student bias
CEQs are unfair in that not all instructors are held to the same standards.
CEQs are a better measure of student bias than of course effectiveness. A variety of studies have shown that student evaluations of teaching such as CEQs reflect student bias based on the perceived gender, race, visible minority status, ESL status, age and beauty of instructors.
In this article, let’s focus on the example of gender bias. One 2016 study by researchers Anne Boring, Kellie Ottoboni, and Philip Stark found that in an online course where an instructor posed as a man for one course section, and a woman for another, students ranked the same instructor lower when they believed it was a woman, even though the instructor performed all tasks identically.
“Women instructors may often have to do more to earn a score equivalent to a man’s.”
Another article by Laube, Massoni, Sprague and Ferber (2007) noted that students expect and receive more time and attention from women instructors, yet are more likely to give them lower rankings for availability.
In short, women instructors may often have to do more to earn a score equivalent to a man’s. When you add additional factors, such as belonging to a racialized minority, overlapping biases are at play. So, the first reason for overhauling the CEQ is about basic fairness.
Disproportionate expectations
The second reason for re-thinking the CEQ relates to health and wellness.
In my own recent research on women professors’ perceptions of gender bias from students, they reported losing sleep over, feeling discouraged about and spending an excessive amount of time dealing with the disproportionate expectations that they face as instructors.
For example, they cited receiving frequent and persistent requests to waive or alter course requirements with little or no justification, and expectations that they will perform clerical work such as printing papers for students.
One woman received the request: “Will you be a sweetie and print this for me?”
The CEQ aggravates this situation by providing a formal mechanism for “trolling”, or anonymous bullying of instructors if they do not live up to these unreasonable expectations.
“Research participants . . . have received comments on their appearance and personality in open-ended questions, such as being ‘easy on the eyes.’”
There is no accountability with CEQs. While many instructors receive productive feedback from students, some students may use the CEQ as a parting shot against a professor they dislike, or who did not give them the grades that they expected.
Knowing that ratings may affect the reputation or advancement of a disliked instructor, students could “low-ball” even quantitative measures that seem more objective than those raised in the Memorial CEQ questions, such as whether the instructor starts and ends class on time.
This is done anonymously — so there is no incentive to be honest or fair. Research participants have shared with me that they have received comments on their appearance and personality in open-ended questions, such as being “easy on the eyes,” suggesting that the CEQ is far more personal than a mere course evaluation.
So, the second reason for overhauling the CEQ is about wellness and workplace health.
Accuracy?
Finally, as its very title suggests, the CEQ is supposed to serve as an evaluation of the course itself.
However, we do not require students at Memorial to attend class by apportioning a segment of the final grade explicitly to attendance. Because attendance is not mandatory, some students evaluating a course may rarely have been present, and it makes no sense to empower those who have not been in class to evaluate it.
Those who do not attend the majority of classes lack the requisite familiarity to assess the course. The argument comes full circle if we consider whether, given the biases that have been found in student evaluations, students are the ideal assessors of course quality in the first place.
For now, suffice it to say that the third argument against the current CEQ is about accuracy.
Changing the measure
In 2007, Laube, Massoni, Sprague and Ferber suggested a number of potential correctives to the biased nature of CEQs.
Some of these include university-based training for administrators on how to interpret these problematic measures when they are used to evaluate instructors, and training for instructors on how to build a case for teaching effectiveness that does not include CEQs.
Where instructors are expected to administer these surveys to students, they may also choose to have a discussion with students about how bias can affect their evaluations of the course and the instructor.
At the moment, however, it seems that we may be able to change the measure itself. The SCCE’s current review of the CEQ is a moment of opportunity.
Let’s use it to make measures of teaching effectiveness more equitable.