Currently, we have VPs assess our ability every three years or so (if they have time). All of my evaluations were good except for one. I had a VP watching me teach an essential-level English class. I introduced the topic, modelled the assignment, and let the students work on it independently. One girl finished first, and she asked me how to figure out what mark she needed on the exam to pass the course. She was sitting right beside the side board, so I showed her there. Apparently that was a flaw in my teaching. According to the VP, I should have kept the students focused on the singular topic at hand, and not let them get distracted by anything else. Yes, even if they've finished all their work, and even if it's me "distracting" them.
I disagreed with her position. I think off-topic conversations with individual students are beneficial to the classroom environment - not to mention to the student. But it doesn't matter that I disagreed because she's the judge and jury. So she had to come again a different day, and I was careful to stay on topic with everyone for the entire period.
So what can teachers do to demonstrate that they're effective in the classroom in a way that's transparent to the public and that actually measures their ability to convey information, make the information relevant, help struggling students understand it, and motivate students to excel?
I think something that could work - and in more ways than one - is to allow colleagues to evaluate one another randomly and anonymously. First the rationale, then the logistics:
I know I do a much better job of evaluating student seminars when I get the entire class to evaluate them too. The kids are typically far more cut-throat than I would be, and they're clear about what their peers should be doing to improve. Teachers, especially when they can respond somewhat anonymously, can offer the clearest picture of what is excellent and what needs work. We're on the front lines, and we, more than anyone, can best evaluate teacher practices.
But this idea actually came from my desire to see how other people run their classes. Teachers often teach in complete isolation. And what better way to really get your head around new pedagogy than watching it in action? A 30-minute observation session can go so much further than hours spent discussing anchor charts ever could.
What about teacher bias? If scores are aggregated and comments meshed together, each assessment stays anonymous. Hopefully that's enough to weed out any bias born of not wanting to offend. And after seeing many teachers in action, just as after watching student presentations, teachers will get better at recognizing what works and what doesn't. AND the bonus part - and why I get students to evaluate each other's seminars - is that evaluating other teachers can help the evaluators reflect on what they, themselves, are doing right and wrong in the classroom.
But what if my evaluation sucks? There's an old saying my dad used to say (and I can't find a source anywhere online): If one guy says you're a horse, he's probably crazy. If a second guy says it, maybe there's something wrong with him too. But if a third guy tells you you're a horse, then you'd better start looking for some hay for dinner. If several teachers suggest that there's something wrong with what I'm doing, then maybe there is something wrong, and I should take a good hard look at what I'm doing. And even if the evaluation is excellent, the comments will suggest something I can do to improve. It has the potential to dramatically improve teaching practices across the board, both from observing others and from reading feedback on your own teaching.
And it will introduce teachers to people they might never have spoken with otherwise, thus fostering greater school community. This is especially true if, instead of coming in randomly, evaluators try to work out a best time with the teachers they visit. But if they show up to classes without warning, then they'll get a much truer picture of how the teacher teaches than when teachers fancy it up for VPs.
I'm thinking something like ratemyteacher.com except more intentional and thoughtful. If every teacher gives up 30 minutes of a prep period 8 or 9 times a semester (18 times a year) - once every other week - to watch another teacher, then they'd be giving up about 9 hours to do so. To compensate for these 9 hours of lost prep time, 1.5 P.D. days could actually be entirely open for independent teacher use instead of for long meetings that we sit through dutifully, yet mark or chat at the same time.
Back to the logistics: in each school, each teacher will see about 18 colleagues teach, and be seen by 18 colleagues, every year (or 10 or whatever, but for anonymity purposes we'll need at least four visits a semester). Each visit will require a 2-minute electronic form scoring teachers on whatever qualities the school dreams up (hopefully they're clear and specific and not laden with jargon - and come with a clear, criterion-based rubric for evaluators to follow). Feedback will also be required on the form, with space to comment on the best part and on strategies for improvement. AND there should be space for teachers to comment on the feedback - why they disagree or what they intend to improve. This could also help do away with the PD forms we're supposed to fill out every year.
At the end of each semester (every nine evaluations), the median and range of evaluations and the feedback comments for each teacher would be published (for whom to see??) along with the courses taught. Unlike ratemyteacher.com, teachers will never see individual scores from another teacher. (I'm suggesting that to reduce bias, but maybe it's not necessary?) AND a median and range of the scores given by each teacher will also be made public alongside the median and range of all teachers compiled together, so teachers can be made aware if their standards are significantly different from others' and to encourage some calibration.
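To make that end-of-semester aggregation concrete, here's a rough sketch in Python. The names, the 1-5 scale, and the data layout are all made up for illustration - the point is just that publishing only medians and ranges (per teacher, and per evaluator for calibration) hides every individual score:

```python
# Hypothetical aggregation sketch: scores are assumed to be 1-5 and
# stored as (evaluator, teacher, score) records. All names are invented.
from statistics import median

evaluations = [
    ("eval_A", "teacher_1", 4), ("eval_B", "teacher_1", 5),
    ("eval_C", "teacher_1", 3), ("eval_A", "teacher_2", 2),
    ("eval_B", "teacher_2", 3), ("eval_C", "teacher_2", 2),
]

def summarize(scores):
    """Only the median and range are ever published - never raw scores."""
    return {"median": median(scores), "range": max(scores) - min(scores)}

# Per-teacher summary: this is what gets reported for each teacher.
by_teacher = {}
for evaluator, teacher, score in evaluations:
    by_teacher.setdefault(teacher, []).append(score)
teacher_report = {t: summarize(s) for t, s in by_teacher.items()}

# Per-evaluator summary: compared against the overall numbers, this
# flags raters whose standards drift far from everyone else's.
by_evaluator = {}
for evaluator, teacher, score in evaluations:
    by_evaluator.setdefault(evaluator, []).append(score)
evaluator_report = {e: summarize(s) for e, s in by_evaluator.items()}

overall = summarize([s for _, _, s in evaluations])
```

A lone harsh or generous score widens a teacher's range but barely moves the median, which is why medians (rather than averages) do the anonymizing work here.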
On top of all this, there should also be a way for some teachers (heads, or just those interested) to be released for half days here and there to go to different schools to participate in evaluations and to observe different teaching styles within their own field. That could help maintain consistent evaluation across the board. This must all be done carefully so as NOT to foster competition between departments or schools. For this reason it might be best for only admin and the individual teacher to see his/her assessment. It has to be part and parcel of the process that we're all collaborating to ensure our teaching is successful.
Teachers could just evaluate within their own department, but in my school, teachers in other fields are doing some pretty cool things, and I'd love to watch - and many teaching strategies are transferable across disciplines. Besides, if teachers only evaluate within their department, they could just be evaluating their friends, which could introduce bias into it all. Or if everyone in the department loves you to bits except that one person, then it could be glaringly obvious who gave you the 1/5.
The Likely Reception:
Some teachers won't want to be watched. I remember, as a student teacher, asking to observe an ESL class. The teacher who taught the course was so nervous because I was there, and she kept asking why I wanted to be there and what I would do with the information. And I was just a student teacher! But, after it happens several times, many teachers will get used to someone dropping in. Ideally it will get easier, and become less stressful, to have fellow teachers in and out every other week than admin coming in once every three years. And if you don't know who's coming when, then you won't have the build-up of knowing it's coming.
Some teachers won't want the burden of evaluating others. Maybe knowing that their time is replaced with open PD days and no VP evaluations would help. Then again, some teachers will want to evaluate others to death. And other teachers won't get why you do anything you do because their methods are radically different from yours. But any extreme scores will be eaten up by showing only medians. Some teachers will blow it off and give everyone the same scores, but that will be caught by tracking everyone's evaluating median and range.
Parents will want to know the results, but it might be best to keep results accessible only to the admin and the teacher in question. Parents can be assured we have an internal structure in place to assess teacher practices on an ongoing basis, one that provides a numerical indicator at the end of every semester, and that weak evaluations are followed up with an intervention process. If parents see the results, it might lead to attempts to switch teachers or even schools when sometimes the problem is simply that the kids don't want to do the work.
Views? Would you prefer this method to others listed? Does it actually provide accountability? Could it create a better, more motivated group of teachers? I think it can at least serve to reclaim our professional status by allowing US to be the judge of the excellence of our fellow teachers, not admin, nor an outside body, nor student ability.
Then check out Steven Hales' post explaining why it's illogical to toss out our current methods, due to lack of evidence, in favour of a different method also lacking evidence.
ETA - I created a rubric, but added it to the following post on evaluating merit (thinking it was here too). And check out this NY Times article - it suggests a mix: 10% from the school, 30% from student evaluations, and 60% peer evaluations. I could live with that.