Educators across New York state were shocked and alarmed by the New York City School District's decision last week to release the scores earned by 18,000 teachers.
The reduction of a teacher's entire year's performance to a single numerical score is the result of the New York State Education Department's effort to comply with President Obama's "Race to the Top" initiative, which demands greater accountability from educators and administrators. The scores are calculated by a complex "value-added" formula that supposedly takes into account student performance, student growth, and a variety of other factors.
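NYSED's actual formula is far more elaborate and not something I can reproduce here, but the general "value-added" idea can be sketched in a few lines: predict each student's score from prior achievement, then credit the teacher with the average amount by which their students beat (or miss) the prediction. The code and numbers below are entirely hypothetical, for illustration only.

```python
# Toy "value-added" sketch -- NOT NYSED's formula. A simple linear
# prediction from prior scores, with each teacher scored by the
# average residual (actual minus predicted) of their students.

from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def value_added(prior, current, teacher_of):
    """Average residual per teacher: did students beat the prediction?"""
    a, b = fit_line(prior, current)
    residuals = {}
    for p, c, t in zip(prior, current, teacher_of):
        residuals.setdefault(t, []).append(c - (a + b * p))
    return {t: mean(r) for t, r in residuals.items()}

# Hypothetical data: two teachers, three students each.
prior   = [60, 70, 80, 60, 70, 80]
current = [68, 75, 85, 62, 71, 79]
teacher = ["A", "A", "A", "B", "B", "B"]
print(value_added(prior, current, teacher))  # A positive, B negative
```

Even this toy version makes the point that follows: the "value" a teacher is credited with depends entirely on how well the prediction models everything else in a student's life, and the real models pile many more variables on top of this.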
And of course, the reaction by teachers to this public release of individual data has made the anti-education crowd howl with triumph. The teachers don't want the data released, they say, because it finally points a finger at underperforming educators, who should either fix what they're doing or else find a new career. The unions are complicit in this desire for secrecy, because they don't want to reform tenure law to make it easier to get rid of bad teachers. Why, they ask, shouldn't taxpayers get a chance to see the job evaluations of the people whose salary and benefits they are directly subsidizing?
Well, I've talked to a great many of my colleagues about this, and let me correct a few misapprehensions. First of all, no teacher I've ever met has ever stated a desire to retain teachers who aren't doing their jobs. In fact, more than one has expressed resentment of the "rubber rooms" and other mechanisms sometimes employed to keep bad teachers on the payroll. It gives, they say, a bad name to the whole profession.
Most teachers are afraid to have the data published for an entirely different reason: they do not trust NYSED to score them fairly.
Here is an agency that spent $13.7 million last year to develop, print, and distribute the Regents examinations statewide, and still can't write a test that meets any reasonable minimum standard for reliability. As an example, a few years ago there was a pair of questions on the "Living Environment" (Biology) Regents exam that went something like the following (I cannot find the specific questions, so I am relying on my memory -- but you'll get the gist):
31. Asexual reproduction produces offspring that
a) are exact copies of the parent.
b) contain a mixture of genes from both parents.
c) have many dominant genes.
d) are better adapted than the parent was.
32. Based upon your answer to #31, the organism that has the slowest rate of evolution is
c) oak trees.
Well, almost everyone got #31 correct; we make a major point of the fact that sexually reproducing species have varied offspring, and asexual species produce clones. But what, then, is the answer to #32?
The answer keyed as correct was (b) bacteria. I remember checking the key more than once, thinking I'd misread it. Surely they can't be implying that, solely because they reproduce asexually, bacteria aren't evolving -- that the method of reproduction is the only thing that controls the rate of evolution?
Yes, in fact, that is exactly what they were implying. Despite abundant evidence that bacteria evolve rapidly (witness the appearance in recent years of antibiotic-resistant strains of everything from staphylococcus to TB), this test claims that humans are evolving faster because we reproduce sexually. So I emailed the science specialist at NYSED and asked if there was some mistake. Within hours, I got back a highly snarky response saying that they had "consulted their experts" and the answer stood. I had to mark as wrong students who correctly identified humans as evolving more slowly than bacteria.
In fact, the tests are so poorly constructed, and the questions (and therefore the allowable answers) so open to interpretation, that NYSED is no longer allowing teachers to grade their own students' Regents exams. Even more telling -- there is now a recommendation by NYSED that Regents exam scores not be used in a student's final grade calculation. How's that for a vote of confidence in the reliability of their own exams?
So I think New York state educators can be excused if we lack confidence in NYSED's ability to design a fair assessment. I'll go even further: I believe that the scores they are assigning to teachers are completely meaningless. Teachers have no control over which students they teach; no control over the poverty level, home life, and other outside stressors their students experience; and little to no control over which subjects, and how many, they are assigned to teach each year. The idea that, given that many variables, you could find a single measure that would equally reliably represent the performance of an AP Calculus teacher in a wealthy Westchester County school and a 6th-grade special education teacher in a poor Bronx school is ludicrous.
But that, unfortunately, is exactly what the micromanaging b-b stackers at NYSED have done, and that is what is being released to the public. Imagine if we did the same to our students -- gave them no feedback of any kind for most of the year, never told them how their grade was to be calculated, and just waited until the end of the year and then assigned, on a seemingly random basis, grades between 0 and 100 to everyone.
That is, in effect, what NYSED is doing. And you wonder why teachers don't want this made public. If your own job evaluation were done in this fashion, by people with this kind of track record for reliability, would you want it published in the newspaper?