The Fall 2012 issue of Education Next featured an article called “Can Teacher Evaluation Improve Teaching?” The article, which can be found here, discussed an evaluation study conducted in a Cincinnati school district between the 2003–04 and 2009–10 school years. In the study, teachers were evaluated by administrators and peer observers who had received training in an intensive observation program. The teachers were given critical feedback that covered classroom management, instruction, content knowledge, and planning. The evaluators followed a rubric based on Charlotte Danielson’s Enhancing Professional Practice: A Framework for Teaching, “which describes performance of each skill and practice at four levels: ‘Distinguished,’ ‘Proficient,’ ‘Basic,’ and ‘Unsatisfactory.’” Though the teachers were given written feedback after each observation and met with the evaluators, there was a final summative score at the end of the school year that came with explicit consequences:
For beginning teachers (those evaluated in their first and fourth years), a poor evaluation could result in nonrenewal of their contract, while a successful evaluation is required before receiving tenure. For tenured teachers, evaluation scores determine eligibility for some promotions or additional tenure protection, or, in the case of very low scores, placement in a peer assistance program with a small risk of termination.
There is a lot of talk in education about how to evaluate teachers properly, and I think the evaluation of tenured teachers is overlooked or omitted from those conversations more often than not. While this study did not claim that tenured teachers were exempt from consequences, it is reassuring for a prospective teacher to know that teachers with any amount of experience will be evaluated, and it is a reminder to tenured teachers that the students are changing, the classrooms are changing, and teaching techniques need to change with them. The study found that teachers did better during the year they were evaluated than in previous years…did anybody really need a study to tell them that? Of course teachers did better during the year they were evaluated! They were being given feedback periodically for a year. They worked on modifying their techniques, and the students reaped the benefits of that kind of feedback and follow-up.
But that wasn’t the point of this study; I just thought that tidbit of information was funny. What I found interesting was that the results were better in math than in reading. That is, students’ math scores rose much faster than reading scores, both during the year of evaluation and in the following years. Does that undermine the validity of this kind of evaluation? No, not at all. It does, however, make me wonder how to effectively evaluate reading and writing teachers in a way that will help students boost their scores. What this study showed is that being evaluated regularly can make a difference in teaching; the open questions that resulted from it are these: How can teachers be evaluated regularly in a way that is beneficial to teachers and students and cost-effective for the school district? Is it really necessary, or can teachers be motivated in other ways? Does the use of peer evaluators take away from students who might otherwise have those skilled teachers in the classroom?
I still don’t know how I feel about the increasing accountability placed on teachers (the line between realistic and appropriate levels of accountability is getting muddled), but I do agree that evaluation can lead to lasting changes in teaching methods and is beneficial to students.