Monday, August 8, 2011

Yet more on Assessment Methods and Their Effects

It sounds like this has been under investigation for a while, but I'm just becoming aware of it: there's apparently evidence of widespread cheating in Atlanta schools -- by teachers, not students -- on high-stakes tests. Of course no one was forced to cheat -- and plenty of people didn't -- but the situation does raise questions about methods of assessing both student progress and teaching, and how teachers and other "educators" respond to them, especially when the expectations are unrealistic.

A couple of examples of coverage and commentary:

Cheating Report Confirms Teacher's Suspicions

Paul Frysh, CNN, Aug 8, 2011

"I started believing that I wasn't a good teacher," Rogers-Martin says. "Other teachers were coming in with these perfect scores and mine are not so perfect. I mean they weren't bad, they were just normal." . . . .Some children with the highest rating on the previous year's test -- "exceeding expectations" -- arrived in her class completely unprepared for the coursework. . . .

Teachers, some faced with unreasonable targets, were cajoled and scared into cheating and were threatened with being placed on a Professional Development Program or PDP, she says. Though it sounds innocuous enough, teachers understood that a PDP could be the first step in losing their jobs, she and other Atlanta teachers said. "I have a husband who has a good job, so I could quit. I could say: 'There's no way! I'm not going to do this,'" says Rogers-Martin. "If I were a single mother and I had two kids' mouths to feed and this was my only job, I would hate to think what I would do -- I don't know." . . . .

It is not even clear that standardized tests are a particularly useful measure of student learning, says Marion Brady, a lifetime educator, former college education professor and author of "What's Worth Learning." . . . "There's a whole range of thought processes: making inferences, generating hypotheses, and generalizing and synthesizing and valuing ... that every human being engages in every day, and nobody knows how to test them (with standardized tests)." When a country builds its education system around standardized tests, and standardized tests are incapable of measuring what needs to be measured, says Brady, "it's a recipe for catastrophe." . . . .

The amount of weight put on tests for assessing not only students but teachers, administrators, schools and even whole states is what led to cheating in Atlanta on such a huge scale, Rogers-Martin says. Kids from underprivileged backgrounds can succeed on these tests as well as any kids, but "it's going to take them a little more time (on average) than it is other children. And there's nothing wrong with that."

A Teachable Moment from Atlanta's Teaching Scandal

Ross Wiener, Washington Post, Aug. 7, 2011

It’s important to understand that cheating in Atlanta was systemic and pervasive. Eighty-two of 178 educators implicated in the investigation admit cheating; misconduct was documented in 44 of the 56 schools examined (the entire district is 100 schools). One school organized an “erasure party” where teachers and administrators created a social occasion out of illegally and immorally faking their students’ test results. . . . Worried that the scandal will undermine test-based accountability, reform advocates are diminishing its significance. Just a few bad apples, they argue . . . . But their critique fails to recognize that what went on in Atlanta most assuredly was not a case of individual bad actors. The problem is that the system encouraged and abetted the violations in ways that we do not yet fully understand. Good people — many of them — resorted to reprehensible behavior in Atlanta Public Schools.

Ironically, the recent focus on more rigorous teacher evaluation should help on this issue. While there is some danger that increased pressure to improve test scores will push more teachers to cheat, the new policies are pushing school systems to build essential capacity. For the first time in most places, districts and states are explicitly describing the practices in which they expect teachers to engage. To support evaluations, states and districts are developing detailed rubrics that articulate how teachers should plan, engage with students and parents, instruct, and assess student learning. Observations against these rubrics create higher-quality, more actionable information about what’s going on in classrooms. Teachers finally can get meaningful feedback and guidance, and alarms can be raised when results are strong but instruction is weak.

As I wrote in response to Cynic's recent post, there's a part of me that wouldn't mind if my students faced a high-stakes assessment conducted by someone other than me, since that would allow us to work toward a common goal rather than get into wrangles over grades. And I might well be better off being assessed on the basis of students' actual progress rather than on how they feel about the class. But the stakes really would have to be at least as high or higher for them as for me (which I don't think is the case for most of the NCLB testing, at least not in the short term), or I wouldn't want to be held responsible for their results. And while I believe it is possible to identify and share best practices, Wiener's vision sounds a bit too reductionist and formula-driven for me. I'm not sure good teaching, any more than the sort of complex learning described by Brady in the first article, can be so easily measured.

4 comments:

  1. What I fear is that with increased testing comes (seemingly) inevitable cheating, and then more restrictions on education, because if students fail to reach standards, it must ALWAYS be the fault of the teachers, right?

    And in some disciplines, standardized assessments (which would be ideal for outsourced grading) just don't cut it. How does one, for example, assess a campus project that only people on that campus know about, or a piece of art created by students for a specific purpose known only to that class? Since I teach writing, some of my final projects are not traditional research papers, but the culmination of a series of projects students have worked on the whole year in a sequence of courses. How does one grade work that one hasn't been privy to all year? Maybe my assumption that only standardized assessments can be outsourced is an erroneous one.

    It starts from way before we ever get them, doesn't it, CC?

    @Cynic: yes, I'm pretty sure that only standardized assessments can be outsourced. Admittedly, there are a few standardized assessment tools that are, as far as I can tell, pretty good; the AP and IB tests come to mind. But they're expensive to administer and grade, and, at least in the case of the APs, are graded by bringing together a bunch of people who teach the class in a centralized place for pre-grading and ongoing norming. Even then the results aren't perfect. And the tales we hear from those who worked in standardized-test-grading mills make it clear that even when teachers don't cheat, the results of less carefully graded tests are suspect.

    And yes, we're increasingly getting students who have been almost exclusively "assessed" this way throughout their K-12 careers, leaving us in much the same position as Rogers-Martin: pointing out that our supposedly competent, or even expectation-exceeding, students are anything but.

    I fear that within a generation, if not already, being taught by a professor who designs and grades his/her own assignments, regularly adjusting approaches as students' needs change (or even by a group of colleagues who work together on either or both tasks), is going to be a luxury limited to the very rich (and a few fortunate scholarship recipients).

  3. I just finished the summer "B" session, teaching calculus I. Summer sessions of this class are always "interesting" since you get two general levels of student ability: those who can handle the material and then some, and those who are taking the course for the third time.

    I had one student who withdrew from the class two weeks into the five-week semester. She needed a signature from me on a withdrawal form, and when I met her before class in the department office, she told me that she had believed she was prepared for the course (she received an "A" in pre-calculus) but realized, by the second day of my class, which was at that point a review of pre-calculus, that she was not.

    Not only was she not prepared, she realized, but her instructor in pre-calc did not even finish the list of topics on the syllabus. He was, as she said, a really nice guy and a very lenient grader, and everyone likes him. He gets great reviews on ratemyprofessors, and people take his class because of this.

    I, on the other hand, am universally hated on the same site because I am not a lenient grader and I give "ridiculously hard" exams. However, the one thing I always tell my students is this: in the next course they take, if a topic depends on material I was supposed to teach, they will be prepared. If Prof. X asks little Jane Doe why she can't integrate a function by substitution, she is not going to answer "Prof. Humungous never taught us that!"
