Tuesday, November 20, 2012

Quit Bitching. You Can Still Post New Things. Another Thanksgiving Week Lazy-Ass Reposting. But At Least This one is From This Damn Blog. 2 Years Ago on College Misery, When We Still Had Hope.


SATURDAY, NOVEMBER 20, 2010

10 Ways to Boost your Student Evaluations

I notice evaluations are on people's minds again, and so I thought I would provide some helpful tips. You are, of course, invited to contribute your own ideas in the comments, dear Reader.

Let's be honest, we all know the best and easiest way to improve student evaluations: give everyone an A. But some of us have standards we are unwilling to compromise, and so we want to improve our evaluations without throwing out the baby of academic integrity with the bathwater of pandering to snowflakes.

I know, many of you think this is impossible, but I am here to tell you that in the past 10 years, I have discovered that it is indeed possible to - well, manipulate is a harsh word, so let's say boost, or maybe improve - improve student evaluations without sacrificing teaching standards.

"How can this be?" I hear you cry. Let me lay it on you.
  1. Chocolate. This is a scientific fact.
  2. Be more physically attractive. Also science. Okay, I realize this one may be difficult for some of you, but you can probably take the edge off the fugly. Comb your hair, trim your beard (especially important for female profs), buy some clothes manufactured after 1979; you get the picture. If you can get a chili pepper on you know where, you are golden.
  3. Never, ever, ever, ever lose your shit. Don't yell at the whole class about anything, no matter how annoyed you are at them. That guy from Florida who is on video yelling at his class for cheating? ALL his students hate that guy.
  4. Praise the whole class generously. (You may need anti-nausea meds, but do what it takes.) Even if they all suck and you want to yell at them, say stuff like "I was really pleased with the level of writing in the vast majority of your assignments". This means that the students who sucked think it is about THEM, not about you being mean and hating everyone.
  5. Never give work back immediately before evaluation day. Unless you gave everyone an A, but if you did, you don't need this advice. If you are late giving back work and really have to give it back on evaluation day, make them come to your office after they do the evaluation. Say "I marked them, but I have to put your grades in my gradebook" or something.
  6. Making students come to your office makes them see you as a human, especially if you have pictures of your kids (if cute) or your pets on your wall. If you don't have kids or pets, stick up some random pictures of cute kids.
  7. Don't overshare. Students don't want to know about your ingrown toenail, or your sexual orientation. Remember, snowflakes don't think other people have feelings, and trying to force them to feel empathy makes them uncomfortable.
  8. Lie and tell them you know they are working hard. Snowflakes all think effort is the same as product when it comes to grades, and this also means they think if you say "I know you are working hard" it means the same as "you are doing well".
  9. Do something fun in the class before the evaluation. Students have memories like goldfish, so you need to give them a positive memory close to evaluation time. You can even do this on evaluation day, if you aren't TOO obvious in your pandering.
  10. Combine 8 & 9. Say "I know you have been working hard, so I am going to end class 15 minutes early just this once." I once got an awesome evaluation by giving my students a 15 minute coffee break before the evaluation.
I know that looking at my list you will notice that a lot of my suggestions involve lying, and maybe you see that as a paradox: how can you reconcile your concept of professional ethics with systematic dishonesty? It's a bastard, I admit.

13 comments:

  1. There are several good assistant professors in my department who go up for tenure in the next year or two. To get tenure, they are required to get student evaluations that are higher than average, for our department. Nothing we've said to our provost about how this is mathematically nonsensical has made any difference.

    (He simply cannot understand that if sustained over any length of time, this would mean that eventually all assistant profs would have to get perfect scores, just like at Lake Wobegon. But then, Ponzi clearly never did understand why people got so upset about his schemes, despite having been jailed repeatedly for them.)

    I have tenure, and quite a bit of seniority and external research funding. Is this not, then, an incentive for me to get the very worst scores on my student evaluations that I can manage? It will help my junior colleagues, after all.

    Oh boy, am I going to have fun with this one!

    1. I am reminded of the scene in "Animal House," where the guy says, "Oh boy, is this GREAT!" I sure wish Greta could be here.

    2. I think you should try the experiment (perhaps by reversing as many of WhatLadder's suggestions as possible), and report back, here and/or in a published paper.

      And I hear you on administrators' numerical/statistical illiteracy. I'm an English proffie who never took statistics of any kind, and even I understand the problem with that approach (and the somewhat similar one some of our administrators take). Somebody is always going to have to be in the bottom 10, 25, 50, 75, etc. percent. In some cases, being in the lower numbers signals a problem with the individual. In others, it signals great strength in the whole group.

      Commentators have been pointing out similar problems with US News rankings' fixation on how many students at a university come from the top 10% of their classes in the wake of the unranking of George Washington U due to reporting of concocted data. In some schools, being in the top 10% of your class is meaningful. At other elite public and private high schools, being in the bottom 5% of the class might still rank a student in the top 5% of high school graduates nationally (depending, of course, on the ranking criteria).

      Have you tried asking the administrators which group of professors they want and/or expect to see ranking in the lower 50%? Maybe trying to answer that question would help them get their heads around the problem?

    3. The other key issue, of course, is the distribution curve. If all of the ratings are lumped quite closely together toward the top of the scale, resulting in a tall, narrow curve, then it doesn't really matter whose ratings happen to fall on the lower side of the median. If the curve is flatter overall, or if it is high with a long, low tail on the lefthand side, then it makes more sense to look at the teachers whose ratings regularly fall in the lowest grouping, to see if something might be wrong. Mind you, it might not be (or the problem, or at least the majority thereof, might lie with the students with whom that teacher works), but I can see a rationale for at least looking at the situation. When everybody is bunched together (even if they were bunched together toward the middle or even the lower half of the scale), I don't.
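The clustering point above can be sketched in a few lines of Python. The scores are purely hypothetical, but they show how, in a tall, narrow curve, "below average" can be a noise-sized distinction:

```python
# Hypothetical ratings on a 5-point scale, bunched near the top of the scale.
scores = [4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.6]

mean = sum(scores) / len(scores)          # 4.40
spread = max(scores) - min(scores)        # 0.4 points from worst to best
below = [s for s in scores if s < mean]   # "below average" by tiny margins

print(f"mean={mean:.2f}, spread={spread:.1f}, "
      f"below average: {len(below)} of {len(scores)}")
```

Three of the eight instructors land "below average" even though the largest gap anywhere in the group is 0.4 points; with a flatter curve or a long left tail, the same label would instead flag genuine outliers worth a look.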

    4. "I think you should try the experiment (perhaps by reversing as many of WhatLadder's suggestions as possible), and report back, here and/or in a published paper."

      OMG this is an awesome idea. Some of them might be a bit hard to do, but I would love to hear what happens if Frod tries either taking away candy from students who brought it to class, or giving out, like, broccoli and brussels sprouts.

  2. I think the suggestions are great. If the administration is going to make staffing decisions based on the evaluations of less than bright snowflakes, then tilting the odds in your favor by something as trivial as giving them a coffee break before the evaluation will give the admin flakes the results they deserve.

  3. This is good stuff. Thanks for the laughs. Taking the edge off the fugly, though, that's a tall order.

    Another approach: I tell my students, "I have tenure and a strong self-image. Evaluate as you please. I'll read your comments and take them seriously, probably sometime next year after I've forgotten your handwriting."

    Works like a charm. It's a little like issuing a minor insult to someone you're trying to pick up at the bar. They are flattered I take them seriously, yet they also know I don't care too much and I am already planning to forget them.

  4. I don't have tenure, but my department does, finally, have a beginning-to-be-established renewal/salary/promotion rating system for NTT faculty that explicitly assigns student evaluations a particular weight (20 or 25%, depending on whether we had a class visit by a colleague that year). While this hasn't entirely eliminated my anxiety over evaluations (higher-up administrators seem to feel free to ignore department assessments/recommendations, and some of them are very much in love with their spreadsheets), it helps.

    I aim to hit the "satisfactory" mark in the evals (which isn't too hard, since overall ratings in my many sections are averaged, smoothing out the often high variability among sections), and concentrate on scoring higher on things that are more under my control (assignment/syllabus design, quality and frequency of feedback, etc.). It's by no means a perfect system, but it's a major improvement over the one we had before, where there was no clear guidance as to exactly what should be examined, and/or given what weight, and evals were the only hard numbers in the mix. In the present system other things, like feedback and curriculum design, are rated on a scale, and the overall rating is expressed as a number as well as in words, so there are other numbers, representing the professional judgment of TT department colleagues, for the number-oriented to latch onto.

    The only complaint we got back from the next administrator up was that much more than 50% of the NTT faculty were receiving a rating that basically meant "way above average," which made no sense (yes, said administrator is more numerically literate, at least when it suits hir purposes, than the ones with whom Frod deals). We changed the wording to something without the implied comparative element -- basically, "really, really good" -- and ze seems to be happy.

    If you're in a position to suggest/lobby for/implement such a system, for faculty of any rank/tenure status, I strongly encourage it. It puts student evals in their place (as one measure that may provide some indications of teacher quality, albeit mixed in with a lot of "noise" generated by students' unrealistic expectations, emotional reactions, and inability to predict what will actually be helpful to them down the road), and also provides a chance to discuss what else the department or program values about teaching, and to figure out how it might be measured (at least to a degree about as imperfect, but in different ways, as the evals.) Measuring the value of teaching is always going to be a difficult and potentially time-consuming endeavor, and all systems devised to do it are going to be imperfect, but even adding a perfunctory review and rating of syllabi, assignments, exams, etc. by another instructor qualified in the same general area as the instructor being rated, and giving that opinion at least as much weight as the student evals (and preferably considerably more) would be an improvement on a system based on evals alone.

    1. Our faculty union lobbied for all this stuff for several months last year, with the result that their recommendations are now being "considered" by some committee of who the hell knows.

  5. I have a somewhat controversial/YMMV suggestion on how to improve evaluations, but here goes anyway. If you can stomach it, take a jog by the site that shall not be named and do some "maintenance" work. That is, report any offending reviews and then add some of your own.

    I've found that my institutional evaluations often parrot my online reviews from past semesters. And I don't mean they "correlate." I mean that students recycle the stuff they've read online--sometimes word for word. I have institutional evaluations that use the exact same language from reviews I got months or years ago online. Snowflakes apparently don't just plagiarize term papers anymore--they plagiarize evaluations, too. And this holds true for positive reviews, not just negative ones. So I figure that I might as well be the one telling them what to say. Make me look good in my dossier, kids. That's right, I'm tough but fair and the most dynamic, splendid writing teacher you've ever had.

    1. I work in the other direction: I sometimes paraphrase comments that I received in the official evaluations, and that I thought were positive but fair, and post them on The Site That Shall Not Be Named myself. Or I post some useful advice (e.g. "just keep up with all the steps of her assignments and you'll be fine") and accompany that with a positive numerical rating (helpful, well-organized, tough). Perhaps I'm setting up a positive feedback loop? If so, I'll take it, especially since I'm pretty sure I could find some research showing that students' attitudes toward a class affect their learning (and, since I teach a required class that many students come in convinced they shouldn't have to take, some attitude adjustment is often in order).

      Aside from that, why bother? Well, I may need to go on the market again some day, and I figure a record of solid if not stellar public evals won't hurt, and might even be necessary. Also, I suspect that administrators at my institution care at least a bit about professors' ratings (they sure care about the US News ones).

      And is it ethical? Well, there's a definite "caveat emptor" involved for anyone who uses the site, student or potential employer. Anyone smart enough to get into college (or to run one) ought to know that the data-gathering method is extremely unreliable (but I also understand why potential employers feel the need to at least check the site, just as I understand why they feel the need to google potential hires, if only to make sure that a dean -- or a student, or reporter following up after some gaffe or scandal -- doesn't find something that they could easily have found).

      I've met a reasonable number of professors who admit to monitoring the site and asking that negative reviews be taken down (and apparently that's pretty easy to do; I've never tried it). I've met fewer who admit to submitting reviews themselves, but I very much doubt Gone Grad and I are the only ones, and it seems to me that the effect would be similar. I don't think I'd bother if I were in a tenured post and feeling secure, but that's probably more a matter of what I feel is worth spending my time on (cultivating my online reputation, at least at that level, doesn't come high on the list). I guess I'm acting more or less on the same principle that I suspect the more rational potential employers do: that one has to expect to be checked (and to check), so whatever is there might as well be at least modestly supportive (and also give students as realistic as possible an idea of the class, the teacher, and how to succeed with both).

    2. "Snowflakes apparently don't just plagiarize term papers anymore--they plagiarize evaluations, too. And this holds true for positive reviews, not just negative ones."

      HOLY CRAP!!! How much further can the bar be lowered? These self-important, entitled *&^@#$ think they have an axe to grind but can't be bothered to actually compose a poison pen review.

      I suppose doing so would be too much like work. It requires thought, i.e. "What didn't I like about the class? And I have to provide examples???? How would I improve the class? And they want more *****in' examples?? Daaaaaaaammmn! They're makin' me express myself! It's so haaaaaaaaard!"

  6. "The other key issue, of course, is the distribution curve. If all of the ratings are lumped quite closely together toward the top of the scale, resulting in a tall, narrow, curve then it doesn't really matter whose ratings happen to fall on the lower side of the median."

    THIS. Why don't people realize this? For the first actual course I taught and coordinated, I thought I received pretty good evaluations. I got all 4s and 5s (good or excellent rankings). Note that I was also assigned this specialized, fourth-year class a whopping two weeks before it started (I'm a sessional), I had no notes or anything from whoever taught the course previously, and it was in a subject area outside my area of expertise. I put a massive amount of work into it. However, I was still 'below average' by some tiny amount (0.1 or something like that). Why does the department chair think that 0.1 point actually matters when nearly all the instructors (including me) have scores clumped in a very small range? It's annoying.

    I do like that the smaller campus I teach at now has other department members sit in on my lectures to evaluate them. That is very helpful.

    On another note, this term, I had one student ask me twice to dumb down my lectures. This is for a university-level course. What the hell? I'm glad someone other than some 19-year-olds gives me input on my teaching now.

    As far as the article goes, I have never lost my shit on anyone, but if a bunch of students are having a loud conversation while I am trying to lecture, I just tell them to please shut up. It works just fine. If I don't, the students who actually want to learn will (justifiably) complain about the noise getting out of control.

