Last week, this video from The Onion (asking whether tests are biased against kids who don’t give a sh^%t) was going viral among education social networking geeks like me. At the same time, conversation continued on the Los Angeles Times value-added story, with the LAT releasing scores for individual teachers.
I’ve written many blog posts on this topic in recent weeks. Lately, it seems the emphasis of the conversation has turned toward finding a middle ground: discussing the appropriate role, if any, for VAM (Value-Added Modeling) in teacher evaluation. But there is also renewed rhetoric defending VAM. Most of that rhetoric takes on most directly the concern over error rates in VAM and the lack of strong year-to-year correlation in which teachers are rated high or low.
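To see why that year-to-year instability matters, here is a minimal sketch of the problem, assuming a simple signal-plus-noise model. The reliability parameters (`true_sd`, `noise_sd`) are illustrative assumptions of my own, not figures from the LAT analysis; they are chosen so that noise dominates the single-year estimate, which is the scenario critics describe.

```python
import numpy as np

rng = np.random.default_rng(42)

n_teachers = 1000
true_sd = 1.0   # assumed spread of true teacher effects (illustrative)
noise_sd = 1.4  # assumed year-specific estimation error (illustrative)

# Each teacher has a stable true effect; each year's VAM estimate
# is that effect plus independent noise.
true_effect = rng.normal(0, true_sd, n_teachers)
year1 = true_effect + rng.normal(0, noise_sd, n_teachers)
year2 = true_effect + rng.normal(0, noise_sd, n_teachers)

# Year-to-year correlation of the noisy estimates
r = np.corrcoef(year1, year2)[0, 1]
print(f"year-to-year correlation: {r:.2f}")

# How often does a bottom-quintile teacher in year 1
# escape the bottom quintile in year 2?
bottom1 = year1 <= np.quantile(year1, 0.2)
bottom2 = year2 <= np.quantile(year2, 0.2)
flip = 1 - (bottom1 & bottom2).sum() / bottom1.sum()
print(f"year-1 bottom-quintile teachers rated higher in year 2: {flip:.0%}")
```

Under these assumed parameters the correlation between two years of ratings comes out around 0.3, and most teachers rated in the bottom fifth one year land somewhere else the next, even though nothing about their true effectiveness changed. That is the pattern the critics are pointing to.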
The new rhetoric points out that we’re only having this conversation about VAM error rates because we can measure the error rate in VAM but can’t even do that for peer or supervisor evaluation, which (the pundits argue) might be much worse. It goes on to argue that VAM is still the “best available” method for evaluating teacher “performance.” Let me point out that if the “best available” automobile burst into flames on every fifth start, I think I’d walk or stay home instead. I’d take pretty significant steps to avoid driving...
http://schoolfinance101.wordpress.com/2010/09/01/kids-who-don%E2%80%99t-give-a-sht/