Student-submitted ratings poor indicators of teaching effectiveness
Recent attention to issues involving student ratings of faculty, such as CMU's national ranking based on ratemyprofessor.com and the question of whether students should have easy access to SOS scores, raises the question of whether such ratings are valid indicators of teaching effectiveness.
At the crux of the issue is whether evidence supports the inference that college students learn more in classes where faculty members receive higher ratings. Unfortunately, research suggests they do not.
In fact, students appear to learn less because these instructors require less work and grade more leniently. Although it may seem counterintuitive, high student ratings could mean less effective teaching, at least as far as learning outcomes go.
The Department of Mathematics at Texas A&M conducted an "experiment" on the use of such ratings in 1994, including common items such as "The instructor seemed to be well-prepared for class" and "I believe the instructor was an effective teacher."
The department confirmed the well-known finding that students in sections where the instructor got higher ratings also received higher grades.
However, when the department tracked students into later math classes, it found a negative correlation between instructor ratings and grades in those subsequent courses, not the positive one that would be expected if high ratings reflected better teaching.
Perhaps more telling, the negative relationship grew stronger by the second or third course down the line. Thus, students appear to learn less in courses where the instructor received high ratings because those instructors required less work and graded more leniently.
This is consistent with robust findings in educational research showing that students learn less when teachers have low expectations and do not maintain high standards. Based on this research, the math department at Texas A&M stopped collecting student ratings out of fear that standards would deteriorate.
Their concerns were justified, based on events I have observed at CMU: A new faculty member who arrives with high standards and requires students to work inevitably gets low ratings and is warned by the department and/or the administration of negative consequences.
Improving student ratings is not rocket science: give fewer assignments and grade exams more leniently. And this is exactly what happens.
Thus, overreliance on student ratings in promotion and tenure decisions leads to reduced academic standards, and providing easy access to these ratings will only increase the pressure to make classes less difficult.
Students will likely suffer because they will learn less and CMU will continue to reward faculty members who lower standards to make their courses easy.
The results of the research are somewhat reassuring when considered alongside the published ranking based on the website ratings, as they suggest that CMU professors may not be among the worst in the country; we may just be among the more demanding and challenging.
Neil D. Christiansen
Professor of psychology