Friday, March 15, 2013

The Unnecessary Agony of Student Evaluations. From the Crampicle.

Hey there, cats and kittens, Heywood from Henderson comin’ atcha with some student evaluation misery!

They suck. We know they suck. The students know they suck FOR US. The admins continue to fellate the student evaluation philosophy with the fervor of a five-dollar whore the day before the rent’s due. This is preaching to the choir, but everybody needs a little gospel to keep the faith alive.
Student evaluations can be either the most painful or falsely ego-boosting things we faculty members read. Sadly, they’re becoming more and more important as American universities veer toward private-enterprise models of educational management. Based on the concept of the customer survey, they have been taken public by a range of Web sites, most famously Rate My Professors.


  1. As long as evaluations are just a mechanism for students to vent their frustrations, that's fine; everybody needs that (as evidenced by this blog). But administrators in the US discovered a while back that they are the perfect tool for exercising power over the faculty (a body to which the administrators can no longer be said to belong). So they're used in performance reviews.

    You think tenured faculty are safe? Let me tell you where this process is: somewhere in the country, the administrators of a so-called R1 are testing whether a faculty member with good research productivity can have his tenure challenged based solely on student evaluations (that is, even when peer reviews of teaching and grade distributions are normal). They have the mechanism in place to do this in opposition to various faculty committees that have looked into the matter. But they've made mistakes, so it's not easy.

    In Europe it's more or less as the article says: the onus is on students, so they work like crazy, learn to be self-motivated in high school, and generally choose the post-secondary track they are interested in, not necessarily academic. So they learn at levels that would be unthinkable in the US at a comparable stage. I know what I'm talking about, since a close relative is a university student in Europe (STEM field).

    Now, to try to predict where things will go in the US, the overarching principle is: capitalist priorities prevail. There is already grumbling from industry that U graduates are not sufficiently "trained" to take up a job on day 1, and "industry" knows (due to the oversupply of workers) that it doesn't need to spend a penny on this problem. There is a growing level of "systemic dishonesty" (people holding degrees without commanding the knowledge normally associated with them). The system will have to give at some point. Somebody (a student? A company?) will feel cheated, and universities will be sued.

    RMP is a different kind of animal: an internet rating site which does not verify the identity of its raters (or whether they are entitled to rate a "service provider"). Such sites should have no credibility, but in practice they do damage people's professional activity (not just college profs'). This is an evolving area of "internet law," and when businesses are sufficiently inconvenienced, the law will follow, and RMP and its ilk will be treated like internet porn.

    1. You bring up some good points, but I don't think admins at an R1 would care about teaching as long as the research money flows in. Low teaching evals would be an excuse to hire an instructor or adjunct, not get rid of a revenue stream, I mean, researcher.

    2. True, researchers who aren't "revenue streams" are easier to harass (compared to other fields, the overhead in math grants is negligible). But the main point is that people who run labs (in my observation) don't teach gen-ed lower-div courses (or do so once in a blue moon); they teach graduate courses or journal clubs. Required courses for the general student population are the killer.

  2. I am unhappy
    with the amount of commentary.
    Such is such.

    I also can't say
    with any certainty,
    if it is too little
    or too much.

    1. Point taken. I'll try to save the fun-less rants for my own blog in the future. (It's just that this topic is irresistible.)

    2. I don't think that Dick Tingle was criticizing you, Peter K. I interpret his poem as an ironic meta-poem about the fact that students complain about a class (i.e. comments), but don't have a basis for their complaints (i.e. can't say how much is too much or too little).

    3. Indeed.

      I was not replying to
      Peter K,
      not one whit.

      And if I am lying,
      may you fill my
      mouth with shit.


      I was cracking wise
      (in my usual way),

      about too much or too little context,
      on these damn linked articles...

      Who is really to say?

  3. "The problem is when we mistake our money for power, as if buying a service gives us control over its manufacture or production. It doesn’t. “Consumer power” is a myth invented to get us to buy more."

    I think his logic is off. He doesn't seem to know a great deal about business (I do have a BBA in Management, for what it's worth). Consumer power is like the old carrot-and-stick metaphor: if the carrot doesn't work, use the stick. In this instance, our money is the carrot. If the business in question wants the carrot, it will do what is necessary to get it. If the service or product falls short, the business doesn't get the carrot, and the stick part of the equation is that the carrot goes to someone else.

    Using consumerist models to run universities is one of the most idiotic ideas ever adopted. I wonder what would happen if a gymflake signed up to train for the Mr. Olympia contest and either didn't come to the gym or sat on a bench posting on Facebook and never touched a weight. After they found out they didn't even qualify to enter, would they have their parent helicopter in and threaten to sue?

  4. I respectfully submit this image as the logo to be used for discussion of course evaluations:

    1. I've never understood why the forms have over twenty questions, when a single one would suffice:

      Instructor overall (circle one) LIKE / DON'T LIKE

  5. I once heard a high level administrator say "We put way too much emphasis on course evaluations. We need better ways to assess courses and also give teachers formative feedback they can use to improve. Course evaluations are almost entirely meaningless."

    I looked under the table for a pod.

  6. Why don't the high level administrators DO anything about evil-uations?

  7. My department has actually moved to a rubric in which evaluations count for a relatively modest proportion of the total available points (20-25%, depending on whether one had a class visit by another professor that year). The rest of the points come from faculty reviewing each other's course materials (syllabi, assignments, graded papers, etc.) and from the report on that class visit (if applicable). It's not a perfect system: only the TT faculty can do the reviews, which is frustrating for those of us who are experienced non-TT faculty and teach quite different subjects and loads, and of course the raters don't score the same things exactly the same way from year to year. But it has the virtue of transparency, and another good thing is that the rubric uses averages, so people teaching 8-10 sections a year can't be hurt badly by one really unhappy class--a phenomenon I've experienced twice in c. 100 sections. As someone who rarely receives the highest scores on student evals (though my averages are comfortably in the fat part of the curve), I find that my stress level has gone down considerably since this system has been in place.

    As far as the Site That Shall Not Be Named goes, I've taken to adding a review or two of my own (and a rating that makes me out to be organized/clear but hard) once or twice a year. I try to also offer some good advice to students who might take the class (e.g. "read the assignments and you'll be fine," "just don't fall behind," etc.). I'm not 100% comfortable doing this (I'd far rather just ignore the whole thing), but I may need to look for another job some day, and, for all that potential employers really shouldn't be relying on information they find on the site, I suspect some do.

    I enjoyed the article, though I wasn't really surprised by any of it. The Europe/U.S. comparison is interesting. There's yet another reason for restoring state funding of higher ed: we might have a fighting chance of restoring standards as well (or maybe not; we'd probably just hear "my tax dollars pay your salary!" instead).

    1. Which reminds me: for the first time, I actually got an explicitly consumerist critique on a student evaluation last year (for a very fast-paced summer course). Lightly paraphrased, it read something along the lines of "she expected too much for a 5-week summer course; she ought to keep in mind that we pay her salary." This, mind you, after I'd sent out an email several weeks in advance of the start of the class warning that it would be very fast-paced, spelling out the time commitment required, and suggesting that students look into taking the course in a longer term if they didn't have the time available.

    2. This seems like a much better model for evaluating teaching since it looks at your teaching abilities from a number of different perspectives, not just the disgruntled students who didn't want to have to do any work.

    3. My department has no formal criteria for the evaluation of teaching; dept heads write whatever they want. A system such as you describe would be a vast improvement.

      One problem with "that site" is that profs' pages are "moderated" by local people. In my case, much of the "moderation" has consisted of flagging/deleting positive reviews (they disappear). So even writing my own (from a proxy server) wouldn't help that much.

      Students don't pay my salary, the state government does. A public U slot is a taxpayer-subsidized scarce resource, so please demonstrate you're a good investment for the State. That's the philosophy in countries where higher ed is essentially free, and the same would be true here if state and federal support were where they should be.

    4. "As far as the Site That Shall Not Be Named goes, I've taken to adding a review or two of my own (and a rating that makes me out to be organized/clear but hard) once or twice a year..."

      I also do "RMP maintenance." I hate it, and I feel like I need to shower afterwards, but I feel like I really have to keep on top of it. For me, it's not just because of potential hiring issues or the fact that it's the first thing that pops up when you google my name, but because I think it has a direct influence on how students behave toward your class. Even if your potential employers haven't read your page, your students certainly have, and they enter the classroom on Day 1 with a pretty set perception of who you are and what you expect.

      Two years ago I had a bad class that resulted in a run of negative reviews. Each new review that appeared seemed to affirm the last--I was a "bias and harsh" grader, the class sucked, I couldn't generate interest, the subject-matter was stupid and pointless, etc. My institutional qualitative evaluations gave a somewhat different and fuller perspective--it was clear that my RMP reviews were being left by a vocal and disgruntled minority (duh)--but I began to suffer consequences. My class enrollment dropped. My students' institutional evaluations seemed to recycle the language that was posted online. Worse, a bunch of students failed my midterm--somewhat interesting, considering the fact that my RMP reviews mischaracterized the midterm as "easy."

      So I started submitting reviews, and things turned around somewhat. I'm always careful not to be too effusive--and I always stress how "tough but fair" I am--but I do think it's made a difference in how my students approach my class.

  8. A different, but related, problem for me -- I'm a graduate TA, and I teach in a department which only has online evaluations. Do I need teaching evaluations to have a shot at teaching even one sessional course, let alone at landing a TT position? Of course I do. But do more than one or two (at best!) out of 50 students ever take the trouble to click the email link and fill out the evaluation? Of course they don't. I never know whether a given year of dedicated tutorial teaching is going to result in *any* official evaluations for my dossier. :-S

    1. How about this: ask your department's teaching coordinator (or whoever is going to write you a "teaching letter" when you apply for jobs) to visit your class one day, and on that occasion give the students an old-style paper evaluation form to fill out (preferably not multiple choice, but allowing comments), explaining the dept follows this procedure for grad students. In fact, you could get a bunch of grad students together to make the request to your dept's assoc. head for graduate studies. I can't imagine they wouldn't agree to do that.

    2. I get better responses (i.e., more of them) if I offer extra credit points. Students have the option, when filling out their evals, to send a confirmation notice to the instructor (no evaluation content is included; it just notes that the student completed it). Offer them a few points for doing this. It won't really impact their grade, and if they are borderline and it does bump them up, then so be it. This assumes your department allows it (ours leaves it up to the instructor).

      Or... you could dedicate class time for students to fill out evals. If your class is not in a computer lab, make arrangements to take them to a lab, and sit them down to do this. 10 minutes is usually adequate.

      You could also ask a student or two to write a letter for your teaching portfolio.

    3. I don't know if this is of any comfort, but a lot of people are in the same position. I taught 80 students last semester and got 16 evaluations. I think a lot of people will soon be hitting the job market with scanty evaluation data. I imagine that search committees will be forced to then turn their attention to other things--teaching philosophy, letters, teaching demonstration, etc.

      Look at it this way--sometimes the absence of data is better than really bad data.

    4. I gave extra credit last quarter if my response rates reached certain levels: an A to everyone on one assignment if the class hit 95%. Every one of my classes came in above 95%. How much was the extra credit worth, you might wonder? 0.02% of their overall grade. It bumped one person up from a C- to a C.