My wife – a special education school teacher – and I have a bit of a policy disagreement regarding teacher merit pay.
Most can agree that our Education System is broken (see anything by John Taylor Gatto) and that some sort of reform is necessary (even if it rankles some of the President’s core constituents). Sometimes that change can only be motivated through monetary incentives. On the macro-level, I can see the potential benefits. On the micro-level, her students may never achieve sufficiently for her to earn said bonuses.
While I’m not necessarily opposed to merit pay, even in a Union environment, I have a problem with the mechanism for earning those incentives: there can be a long-term misalignment between the outcome expected (educated students) and the data measured, distorted by short-term individual gain (executive bonuses). Basically, an administrator can ‘work the numbers’ to earn a bonus, totally above board, and still fail at the long-term mission of education.
I’ve already heard stories of local districts being accused of monkeying with metrics, and I’ve heard others about services being in-sourced or out-sourced solely for monetary savings, irrespective of the quality or appropriateness of the service. The spreadsheets certainly distilled the data to highlight what was best.
On the surface, a data-driven approach resolves many of the existing problems by being totally objective. If 500 students out of 800 go to college, that is a 62.5% advancement rate. Dispassionate statistics are immune to political and ideological pressures. There is no room for argument – data and spreadsheets don’t lie. Unless we ask them to…
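To make the "unless we ask them to" point concrete, here is a minimal sketch of how the same raw numbers can yield different reported rates. All figures and the reclassification scenario are invented for illustration, not drawn from any real district:

```python
# Hypothetical illustration: the same 800 students, counted two ways.
# All numbers are invented for the example.

def advancement_rate(advanced, total):
    """Percentage of students counted as advancing to college."""
    return 100 * advanced / total

# Straight count: 500 of 800 students go on to college.
honest = advancement_rate(500, 800)       # 62.5%

# 'Worked' count: reclassify 60 non-advancing students as transfers
# and quietly drop them from the denominator.
worked = advancement_rate(500, 800 - 60)  # roughly 67.6%

print(f"reported honestly:      {honest:.1f}%")
print(f"after reclassification: {worked:.1f}%")
```

Nothing in the second calculation is arithmetically wrong – the distortion happens entirely in the subjective choice of who counts, before the spreadsheet ever sees a number.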
Consider the case of the Philadelphia Police Department and the under-reporting of sex crimes (rape) as exposed by the Philadelphia Inquirer:
“Going down with crime.” For years, that was the phrase among Philadelphia police about their culture of minimizing or dismissing complaints from crime victims. This practice kept crime statistics low, thereby improving the department’s image.
In articles and series published in recent years, The Inquirer examined that long-standing practice – especially the “downgrading” of rapes.
Former Police Commissioner John F. Timoney eventually acknowledged that many rapes and other sex offenses had been improperly classified and had received little or no investigation. The department eventually admitted that its sex crimes unit had misclassified 1,822 crimes dating back to 1995.
It’s unclear if the downgrading happened on the beat, at the precinct, or at the Roundhouse.
There is little incentive (beyond avoiding work) for unionized, uniformed officers to downgrade statistics: the numbers would be unlikely to aid them in promotions, and their contracts provide no bonuses tied to them. It’s also unclear whether senior uniformed or civilian management had any bonus monies, incentives, or promotion opportunities tied to the statistics, though it’s hard to imagine PPD leadership wouldn’t have some sort of stake. It’s also fair to consider the political pressures that might have been at play.
Most depressingly, by reporting dropping crime rates and promoting their own performance, they were likely reducing state and Federal aid in both manpower and money, leading to a more dangerous environment for both Police Officers and citizens. Nor was this only a Philadelphia problem; the issue has appeared on University and College campuses as well as in other towns and municipalities.
The PPD subsequently employed CompStat to evaluate the metrics, but the underlying reports are still initiated by people and paper.
The same questions were asked about George W. Bush’s Texas Miracle, which subsequently served as the model for “No Child Left Behind”, and are now being asked of Obama’s Education Secretary Arne Duncan.
Yes, metrics are important, but they can’t be everything, and they can distort and obfuscate just as much as they can lend clarity. What you decide to measure (teacher performance) – and what you choose to ignore (administrative performance) – are subjective choices. If you make uninformed choices at the outset, your errors are compounded at the conclusion.
Further, who understands all the various coefficients, powers, and standard deviations? Statistics was my single worst subject, bar none (okay, not really – you can throw Chemistry and Physics in there, too. And advanced maths. All of them), and I know I’m not alone. Does the citizenry even understand the problem:
[ARNE] DUNCAN: Here’s another Gallup result that I think is fascinating. This is the most remarkable finding. Everyone thinks their own school is good and that everybody else’s school is bad. That’s a constant theme. (See Tables 2, 3, and 4 on Page 11.)
KAPPAN: Why do you think that exists?
DUNCAN: Too many people don’t understand how bad their own schools are. They always think it’s somebody else’s kid who’s not being educated. They don’t understand that it’s their own kid who’s being short-changed. That’s part of our challenge. How do you awaken the public to believe that your own kid isn’t getting what they need and you don’t know it. If they would wake up, they could be part of the change. We need to wake them up.
I wouldn’t go so far as to say they don’t know their school is bad. I would say that they have no idea which data points are important, and that they would prefer that those who do make the best decisions possible. Education Reform is a little like the Federal Deficit: everyone knows it’s important, but no one really knows why, what to do about it, or how it affects them.
The ubiquity of computing devices, always-on Internet access, and a fire hose of data will make it even more difficult to separate the wheat from the chaff – and even easier to cherry-pick or work the data. We can’t be data-driven without being data-literate.