Tuesday, September 18, 2012

Bad Data That Leads to the Wrong Answer. From the NYTimes.

by Stuart Rojstaczer

Student evaluations are a poor indicator of professor performance. The good news is that college students often reward instructors who teach well. The bad news is that students often conflate good instruction with pleasant ambience and low expectations. As a result they also reward instructors who grade easily, require little work, are glib and chatty, wear nice clothes, and are physically attractive.

It’s generally impossible to separate all these factors in an evaluation. Plus, students will penalize demanding professors or professors who have given them a bad grade, regardless of the quality of instruction that a professor provides. In the end, deans and tenure committees are using bad data to evaluate professor performance, while professors feel pressure to grade easier and reduce workloads to receive higher evaluations.

Student evaluations can be useful when they are divorced from tenure, retention and promotion evaluations. If a professor asks students to anonymously provide information on what works and what doesn’t in a classroom and come up with suggestions for improving the class, students can often provide valuable feedback. But this kind of information is essentially constructive criticism, an anonymous dialog between the professor and student. It shouldn’t be transmitted to higher-ups in university administration.

MORE OF THIS ARTICLE.

A LARGER DEBATE ON THE ISSUE.

11 comments:

  1. While it seems glaringly obvious to us, we should give Mr. Rojstaczer props for even suggesting in print with his name attached that adminiflakes have no business caring about what students think of their instructors. This strikes me as a fairly radical position in an era when everyone and their third cousin is calling for more, not less, supervision -- and, correspondingly, more supervisors to keep tabs on people -- in education at all levels.

  2. Like Edna, I'm glad to see the subject aired in a major news organ, read by both parents and employers (sometimes the same people) with considerable influence, and appreciative of Rojstaczer for bringing it up. However, I have to take issue with this idea:

    In upper-division courses, inferences can be made on instructor quality from the outcomes of graduates in the major area of study. One can examine whether a professor has inspired an unusual number (either large or small) of undergraduate students to choose careers in that professor’s subject area. Such outcome-based measures would require extra work, but they would also tend to be fairer and ultimately more informative than the bubble sheets filled out by students today.

    This isn't going to work very well for the humanities in general, where the major doesn't lead directly to any one career (but can provide an excellent foundation, in the form of intellectual skills, for a wide variety of careers). There's also the fact that responsible humanities professors these days think many, many times before encouraging promising students to follow their own career path, since, to all intents and purposes, it no longer exists.

    That said, there are still good ways other than evaluations to measure student performance. The performance-at-the-next-level one would, indeed, probably work for some intro classes, probably more in the STEM fields. And some sort of portfolio grading, perhaps with entry and exit samples, can tell you something about what a department, if not an individual, accomplishes over the course of 4 years.

    But it's hard to measure development in the humanities, especially since students tend to fall apart in one area (e.g. grammatical structure/correctness) when they're making major progress in another (complex thought), and the catching-up process doesn't fit neatly into a semester, or take place in, or as the result of, a single class. Often, our best students' assemblages of growing intellectual skills resemble adolescent labrador retriever puppies: energetic, exciting (and excitable), with the component parts decidedly out of proportion with each other, and subject to pratfalls and assorted other mishaps along the way, none of them all that serious unless we take them to be.

    Both student evaluations (which favor the professor who can make students feel that they've finished/mastered something in the course of the semester) and frequent standardized testing (which embodies and imparts the idea that students should be advancing in a neat progression, preferably in lockstep with each other) work against pedagogical practices that foster the messy process of true intellectual development.

    Replies
    1. "One can examine whether a professor has inspired an unusual number (either large or small) of undergraduates students to choose careers in that professor’s subject area. "

      I actually went way beyond taking issue with this idea when I read it. I call outright bullshit on it. It makes me responsible for the student's actions, decisions and accomplishments (for which I really shouldn't take credit) or lack thereof (for which I refuse to take blame).

      I have a former graduate student who decided that they really didn't want to spend their professional career analyzing hamster by-products, and instead wanted to raise a family on Baffin Island. So they did. They are currently very happy, and I'm quite sure that this counts as a black mark against my career.

    2. "One can examine whether a professor has inspired an unusual number (either large or small) of undergraduates students to choose careers in that professor’s subject area. "

      But what about fields like my own, astronomy, in which the job prospects are stinky? Frankly, I think I inspire too many students. I do not like making a living as a vampire. Did you know that Ponzi himself clearly never understood why people got so upset about his schemes, even though he was repeatedly jailed for them? I therefore always make good and sure all my students know how few jobs in astronomy there are, despite the field being so intellectually lively. They never listen.

  3. If you're in need of an emetic and/or an excuse to defenestrate the device on which you're reading, try reading the contribution from Jeff Sandefer of the Acton School of Business entitled "Our Motto: Give the Customers What They Want". Really disturbing.

  4. I agree that it's nice that this "debate" is happening in a major publication, but these articles are rather light on evidence and largely written by the upper class of academia: a SLAC president; a SLAC dean; a business school founder; and a former professor of geology and civil engineering. Only the authors of the last article appear to have actually *researched* the topic. And they appear to have empirically proved that inexperienced faculty are inexperienced. IMHO, clearly in the "water is wet" category of analysis.

    Expanding beyond the linked article to the others in the debate, is that Acton guy not freakishly scary? Can we bury him under a shed somewhere? And the Dean who thinks that students are "experienced learners" is a Dean . . . and I don't mean that as a compliment!

  5. I've found that if I do an informal "constructive criticism" poll in the middle of the quarter, and then change at least one thing based on that criticism (which is often very reasonable), my evals soar. Make of that what you will.

    PS Cassandra: "students tend to fall apart in one area (e.g. grammatical structure/correctness) when they're making major progress in another (complex thought)." YES YES YES they do. I've never heard anyone else confirm this, but I'm always telling demoralized students that this is what happens.

    Replies
    1. @F&T: I can't take credit, since it's something that I first heard articulated by my (comp) program director, though it certainly rang true for me. I'm not sure whether it's common comp wisdom or not (since I'm not formally trained in the field, just all-too-experienced);I think she ends up citing the phenomenon a lot in explaining to non-English/comp faculty why, no matter how well we teach it, freshman comp can't serve as a prophylactic against any student ever annoying a prof in an upper-level class with tangled and/or ungrammatical prose.

  6. Amongst my problems with the weight attached to student evaluations is the fact that students' perceptions of proffies/classes shift over the years. I get evaluations complaining about EVERYthing: me, the course, the fact that the course is required. Yet, every year, an old student will drop by my office -- or interrupt my post-work beer'n'read -- to tell me that the course they hated turned out to be the most useful course they took.

    That's not even mentioning that if I adopted the tone with students that they use in evaluations of my performance, I'd have helicopter parents swarming me like wasps.

