A Sense of Hope for CT Education
Listening to today’s news on the car ride to school with my son, I felt a tremendous sense of optimism for CT education. In a decision that could fundamentally reshape public education in Connecticut, the state was ordered on Wednesday to make changes in everything from how schools are financed, to which students are eligible to graduate from high school, to how teachers are paid and evaluated. My son became my initial researcher during our car ride, looking up articles and organizing an outline for this post (real-world instruction for sure). While all elements of the court’s decision are indeed “fundamental” to reshaping CT education, the last element generated the greatest sense of hope for me, given my investment in educator evaluation and my organization’s work in more than 48 CT districts and in four different states.
According to a NY Times article, “The judge…criticized how teachers are evaluated and paid. Teachers in Connecticut, as elsewhere, are almost universally rated as effective on evaluations, even when their students fail. Teachers’ unions have argued that teachers should not be held responsible for all of the difficulties poor students have. And while the judge called those concerns legitimate, he was unconvinced that no reasonable way existed to measure how much teachers managed to teach.”
What needs to happen now is to seize this opportunity: address the design of the educator evaluation model originally presented to districts, and provide better training and support so that CT evaluators can implement it well.
What’s Gotten in the Way
The question we all need to be asking is what has gotten in the way, both before and over the past four years, of the creation and implementation of educator evaluation. To be clear, this is not one of those simple attempts, so common in blog postings, to assign blame. Instead, I write today to highlight three primary reasons we need this change, in the hope of providing guidance on the new path toward the “reasonable ways to measure how teachers manage to teach” and “how educational leaders manage to lead”.
Reason One: We started with an ill-conceived definition of evaluator capacity.
As the State Department and districts began to implement guidelines from the Performance Evaluation Advisory Council (PEAC), they worked from the premise that if an evaluator could “accurately assess teaching practice,” they would be able to support teacher effectiveness and improvement. This is no fault of theirs, since they were simply working from the research and literature they had at the time, mostly the Measures of Effective Teaching (MET) studies. The fundamental flaw is that these studies never examined how the accurate assessment of practice, and the corresponding training models, would turn “accurate evidence” into feedback that ensures growth for a teacher. The research has since expanded the definition of evaluator capacity, and PEAC needs to consider this new information to restructure how we define evaluation guidelines.
I begin with this reason because, much to its credit, the CT State Department of Education has already taken steps to change what it means to be an effective evaluator. The Talent Office has introduced a new training model for evaluator capacity that focuses on feedback for learning rather than inspection of practice.
The greatest impact will come from the expansion of this definition of evaluator capacity. Evaluators need to be measured not only on how accurate they are but also on their ability to…
- observe for and collect specific evidence,
- align that evidence to the teacher performance expectations outlined in the CT Common Core of Teaching,
- focus evidence collection on the impact of the teacher on student learning both in the moment and over time, and,
- organize that evidence into objective, actionable feedback that can ensure teacher growth.
This is the intent of the CT State Department of Education in making the change, and I applaud them for that effort. The concern, of course, is that the state of CT’s recent funding issues may limit the reach of these services to districts. The change is underway, however, and with the support of policymakers, we can continue to ensure that every teacher has access to a high-quality evaluator who can provide feedback for learning.
It is important to note that this capacity discussion applies to those who evaluate our building-based leaders as well. Remember, we provide supervision and evaluation to our leaders (more often than not, the evaluators of teachers) through our educator evaluation model, too. Aligning our training models for evaluators is an absolute must if we wish to see a better evaluation program overall.
In addition to changes in the training and development of our evaluators, we need to give careful consideration to the number of teachers we ask a building-based leader to evaluate. At times this number can reach 30 teachers, which, given the complexities of the work, is not, to put it in the court’s words, part of a “reasonable way to measure how teachers manage to teach.” Legislation needs to support the State Department of Education in carefully examining the structures and policies that ensure evaluators can provide deep, impactful feedback.
Reason Two: We are applying inaccurate and sometimes altogether invalid data when we connect teacher practice to student outcomes through Student Learning Objectives.
The discrepancy between teacher ratings and student performance cited by the CT judge is the direct result of two flawed approaches to analyzing student achievement in the existing educator evaluation model. First, as stated, evaluators need better training to ensure that their measurement of classroom practice includes a quality analysis of practice while focusing on student learning. Overinflation still occurs in our ratings of a teacher’s classroom performance (which constitutes 40% of a teacher’s overall score) because the evaluator is not equipped, whether by time or by skill, to complete the task effectively. What we have also seen, however, and I am certain the data the State is examining can verify this, is that even when an evaluator assesses practice rigorously and rates classroom performance below the proficient level, a less-than-rigorously designed, and more than likely invalid, set of Student Learning Objectives (SLOs), which constitute 45% of the teacher’s overall evaluation, inflates the score.
Simply put, it is either a poorly assessed and invalid set of data provided by the evaluator about classroom practice (40%), an invalid SLO (45%), or some combination of the two that is creating the discrepancy. Take, for example, the following SLO one might see in a teacher’s plan:
“80% of students will make one year’s growth in reading skills as measured by the reading assessment.”
Other details about which elements of the reading assessment constitute “reading skills” are provided in the plan; the real issue, however, is how the teacher’s rating is calculated from student performance against this goal. Suppose this elementary-level teacher has 30 students in class. Once results are in, student performance is reviewed against a locally driven formula, and based on the number of students achieving one year’s growth, the teacher receives a rating from Below Standard through Exemplary (1-4). Typically, the rating depends on what share of the SLO’s targeted students actually met the goal. In other words, if 100% of the targeted 80% of students make one year’s growth, the teacher receives an Exemplary rating (24 of the 30 students in class make goal). A “Proficient” rating would fall in the range of 75% to 80% of the targeted students meeting goal (meaning at least 19 students met goal). So, in this situation, 11 students can go without one year’s growth and the teacher will still receive a “Proficient” rating for 45% of their overall evaluation. Even if the evaluator has rated the teacher’s performance and practice (40%) in the “Developing” range, the teacher with 11 of 30 students not meeting one year’s growth on key reading skills will be deemed “Proficient” overall. This is one of the key reasons we see discrepancies in the data across states.
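To make the arithmetic concrete, here is a minimal Python sketch of this rating logic. The threshold bands, the 1-4 mapping, and the 15% weight assigned to the remaining evaluation components are illustrative assumptions drawn from the example above, not the official scoring formula of any district or of the state model.

```python
# A minimal sketch of the SLO rating arithmetic described above.
# The threshold bands, the 1-4 mapping, and the 15% "other" weight are
# illustrative assumptions, not an official district or state formula.

def slo_rating(students_meeting_goal: int, class_size: int,
               slo_target_pct: float = 0.80) -> int:
    """Map the share of the SLO target achieved onto a 1-4 rating."""
    target_students = class_size * slo_target_pct      # e.g. 24 of 30
    share_of_target = students_meeting_goal / target_students
    if share_of_target >= 1.00:
        return 4   # Exemplary: all targeted students met goal
    elif share_of_target >= 0.75:
        return 3   # Proficient: 19 of 30 students clears this bar
    elif share_of_target >= 0.50:
        return 2   # Developing
    return 1       # Below Standard

def overall_rating(practice: int, slo: int, other: float = 3.0) -> float:
    # Weights from this post: practice 40%, SLO 45%. The remaining 15%
    # ("other" components) is an assumption added to complete the example.
    return 0.40 * practice + 0.45 * slo + 0.15 * other

# 19 of 30 students meet goal, so 11 students miss a year's growth ...
slo = slo_rating(19, 30)            # -> 3 (Proficient)
# ... yet paired with a "Developing" (2) practice rating,
combined = overall_rating(practice=2, slo=slo)
print(slo, round(combined, 2))      # prints: 3 2.6
```

Under these illustrative weights, the invalid SLO carries enough weight to pull a “Developing” practice rating up to an overall score that rounds toward “Proficient.”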
In any case, it is the design and implementation of the evaluation model that comes into greatest question, not necessarily the idea of using student achievement in the evaluation of an educator.
Potential solutions lie in designing whole-school or grade-level/subject goals in which all teachers and educators in the school are tied to overall student achievement levels, in alignment with the strategic needs of the district. Additionally, developing district capacity to align, design, and analyze assessments against specific learning outcomes for a teacher’s students needs to be a focus. Reliance on a single standardized assessment is flawed because it cannot adequately represent teaching quality, and the current structure and implementation of SLOs still leaves too many students behind. Moving to grade-level or whole-school goals has its own flaws that still need to be considered; at the very least, however, we will ensure that we are promoting the village’s responsibility, not just the individual’s, for ensuring the success of our students.
Reason Three: We have not made learning the true objective of educator evaluation.
One of the reasons this court case includes a decision about educator evaluation is that we (the adults) have not viewed evaluation as an opportunity for learning. The idea of being evaluated by someone else is often met with skepticism, or downright mistrust of its purpose. Old paradigms of “us versus them,” and the belief that coaching somehow cannot happen through the evaluator role, are at the center of this thinking and need to be confronted.
The CT State Department of Education, through PEAC, needs to make changes to the policy and structures of the educator evaluation model: the way we define evaluator capacity and the way student outcomes are designed and measured. Fundamentally, they also need to engage in a dialogue about learning, clearly outlining the values and beliefs we hold as educators about our own growth mindset and willingness to learn, and the notion that each student’s growth is not only needed but expected.