Gary Rubinstein has recently published two blog posts dissecting the data for VAM teacher evaluation in NYC. Check out Part 1 and Part 2 on his site.
A hard lesson learned by most engineers early in their careers doesn't seem to have translated to the reform movement very well… data is amazing and we should use it to guide decisions to the greatest prudent extent, but more important than what data we have is what we know about its limitations. I know I've made a few rash errors in the cubicle; in my haste to get to the right answer, I treated as gospel a dataset which later turned out to be highly suspect. Fortunately I hadn't published the papers yet, so I only ended up having to redo a lot of work. Lucky.
Core rule of the engineering world: you can almost never directly measure what it is you want to know. You must infer results from imperfect measurements of associated variables. That process comes with varying degrees of uncertainty, bias, error, and false assumptions. The right thing to do is to explore what the data can tell us, experiment and validate against simple and solid areas of known science, then build up models from there with increasing confidence.
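The gap between a true quantity and its noisy proxies can be made concrete with a quick simulation. This is a minimal sketch, not anything from the NYC data — the numbers (`true_value`, `measurement_noise`) are hypothetical, chosen only for illustration. The point: even when repeated measurements average out to roughly the right answer, any single measurement can land far from the truth, which is exactly the trap of treating one number as gospel.

```python
import random

# Hypothetical illustration: we want a "true score" but can only
# observe a noisy proxy of it. The average over many measurements
# recovers the truth, but individual measurements scatter widely.

random.seed(42)

true_value = 50.0        # the quantity we actually care about (hypothetical)
measurement_noise = 10.0 # std dev of the proxy's error (hypothetical)

# Simulate many independent proxy measurements of the same quantity
estimates = [random.gauss(true_value, measurement_noise) for _ in range(10_000)]

mean_est = sum(estimates) / len(estimates)
spread = (sum((e - mean_est) ** 2 for e in estimates) / len(estimates)) ** 0.5

print(f"mean of estimates: {mean_est:.1f}")  # near the true value
print(f"spread (std dev):  {spread:.1f}")    # near the noise level
```

In this toy setup the mean comes out close to 50, but a single draw routinely lands in the 30s or 60s — a 20-point swing on a 50-point truth, from noise alone.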
In engineering, the consequences of such an error in judgement can be dire — bridges collapse, space shuttles explode, levees fail, weapons misfire, or a lot of money gets burned. These are serious indeed, but in education, the consequences seem potentially horrifying. We’re talking about the education of children, and by extension their future livelihoods and those of their children. The future society and workforce of the nation and world. Not to mention the careers of many thousands of teachers.
“Not everything that can be counted counts; not everything that counts can be counted.” –Albert Einstein.
I don’t get it. What is the root of the contemporary popular belief that if something can’t be quantified, it has no value? Or, more dangerously, that if the computer spits out a number, it must be right?