Course evaluations fail to do their jobs

We’ve reached that point in the semester again: the time when we have to fill out evaluations for our classes. In theory, this seems like a great way to get feedback; it’s our chance to grade the professor. But even though these evaluations are given with good intent, the questions we’re required to answer aren’t phrased in a way that lets us give our best answers.

On these class evaluations, we are given two sections in which to voice our feedback. The first is the infamous “bubble answers,” where we rate the course on a scale of 1 to 5. The second contains short-answer questions. These evaluations would be far more useful if they were tailored to the subject being evaluated. Each subject should have its own evaluation sheet: science classes should have their own specific questions, the English department should have its own, and so on. This would let students give answers far more specific to the course.

In addition to separate evaluations for different subjects, the evaluations should be shorter. I understand that the University wants as much of our feedback as possible. But as a student, I know how tedious it is to sit there and fill out twenty-six questions, many of which we barely know how to answer. When I’m asked how a course has “enhanced my learning,” I’m not sure how to respond. Sure, I’ve learned things in the class, but is that what the question is asking? Or is it asking whether the class has made me a better learner? These questions just aren’t written with students in mind.

Another thought that crosses my mind: do professors actually read these evaluations? They all tell us they don’t see them until the summer, but do they actually use the feedback we give? It seems to me that professors use the same PowerPoints and course materials for years, so I’m not entirely confident our feedback is put to use.

The part of the evaluation I do like is the short-answer section. It actually lets students voice their opinions rather than pick from a list of “highly satisfied/dissatisfied” or “strongly agree/disagree” options. The freedom to write what you think about the course is extremely helpful: in my own words, I can say what has helped me and what has been an issue. Students have opinions on their classes, good or bad, and that feedback is important. Answering just a few short-answer questions about the course would probably produce better feedback than the evaluations we currently fill out.

I asked my friends what they thought of the evaluations. They asked to remain anonymous, but they had a lot to say. One said, “If I like the teacher and the class, then I just put highly satisfied answers all the way through. Same if I don’t like them. I just put highly dissatisfied for everything.” She added that the evaluations are way too long and should be shorter. Another friend, who also chose to remain anonymous, said similar things: “I think the evaluations should be more class specific. I don’t understand why we have one generic evaluation for classes that are completely different. It just doesn’t make sense to me.” I don’t think we’re the only ones who feel this way.

The last part of the evaluation I wish were different is the final question. It asks you to classify the course as helping with critical thinking, quantitative reasoning, or several other types of learning. The problem is, most students have no idea. They don’t know what qualifies a class as “quantitative reasoning,” so they end up filling in a random bubble. When we filled out these evaluations in my chemistry class a few days ago, my professor said this question is the most important one. That got me thinking: if students aren’t even sure what the question is asking, how can we answer accurately? It would help to have a description of each type of learning, so students would know which one the class they are evaluating falls under.

Evaluations are great, and feedback is important for improvement. I simply have my doubts as to whether the University’s method of evaluation does its job. Separate evaluations for different subjects, shorter evaluations with more tailored questions, and clearer explanations on the forms are all changes that would improve the feedback students provide.

Ochs can be reached at [email protected]