Trust, Accountability, Autonomy

Summer 2011

By Carlo DiNota, Hugh Jebson

Faculty evaluation. Arguably no other topic in independent education evokes as much passionate discourse — mostly negative, or at least freighted with anxiety. But, in our experience, it doesn’t have to be this way. At our school, Berkeley Preparatory School (Florida), we’ve recently developed a teacher evaluation model that is broadly inclusive in its development and meaningful for all involved.

In the spirit of collegiality, we offer some of the lessons we’ve learned, and some of the pitfalls we encountered, along the way.

Diagnose and Unfreeze

During the transition from the outgoing upper division director to the new one (at the end of the 2006–2007 school year), the information we gathered from faculty and other school constituents through surveys, individual meetings, informal coffees, and the like pointed toward a strong desire within our community to transform the way we assessed faculty performance and, equally important, promoted professional growth. It was clearly time to retire the old model and start anew.

In the fall of 2007, our upper division began a formal exploration of the subject of teacher evaluations, hoping to have a new working model ready for fall 2008 implementation. Before establishing a committee, however, the assistant headmaster and a faculty representative appointed by him (the authors of this piece) began an honest “everything on the table” dialogue, which included an initial force-field analysis (essentially weighing the pros and cons of the project), discussions of best practices and research, and possible timelines for completion. Most important, we asked how a new model could promote meaningful faculty evaluation by encouraging the sincere, helpful, and ongoing feedback necessary for professional growth.

Ask the Essential Questions

Assuming your school currently has a formal process for evaluating teacher performance, a seemingly obvious but important first step for those interested in reforming the model is to ask colleagues (teachers and administrators) for their views of the existing practice. Our experience in discussing this topic with fellow educators leads us to believe that typical responses may include the following sentiments: “top-heavy,” “adversarial,” “informal and without clear structure and process,” and “no faculty buy-in.” Another important starting point is to gauge colleagues’ understanding of the de facto purpose of evaluating teachers. Is the process intended to do little more than assess basic teacher competency in the classroom? Or is its goal the professional growth and development of the educators at the school? The answers to these questions and others regarding the evaluation model can prove most helpful in guiding the direction of any reform process.

Assess Your School’s Culture

Nothing has more power to inhibit or enable reform or growth than the culture (overt or covert) that operates in every independent school, and how a school handles teacher evaluation often reflects, sometimes glaringly, an institution’s culture. Some faculty evaluation models, we have found, are one-sided, adversarial, theory-X[1] monologues from a bygone era. Some depersonalize teacher performance through lengthy bureaucratic checklists, often with criteria impossible to measure within the constructs of the model. Some are “don’t rock the boat” exercises in the absurd, focusing exclusively on assessing and confirming what an administrator, and often the rest of the school, already knows: the basic competence of the teacher. Yet these models have a way of hanging around a school year after year, despite how little they promote the actual professional growth and development of teachers. Do they speak to the existing culture in a school? An evaluation model may not always accurately mirror a school’s culture, but quite often it does indicate something about it.

We believe the strongest and most effective models — those that promote professional growth and outstanding teaching and learning — are found in schools where there is a shared sense of ownership for student outcomes. The culture in these schools is one of trust among the various constituents, where accountability is embraced and autonomy supported. 

We also believe that faculty and administrators have a professional obligation to regularly examine the culture that operates in their schools. As reluctant as we might feel to enter into potentially difficult or contentious conversations, it is only possible to develop an evaluation model that fits the identity and needs of an individual school if we are willing to be honest with ourselves as professional educators. The best results come when we can do this in a spirit of open dialogue, without fear of judgment or, worse still, recrimination.

For our school, we concluded that, in order to get this complex and vital initiative right, we had to further fortify the bridge of trust and communication established between administration and faculty over the years. We took a detailed look at the factors that largely determine or influence school culture: relations (formal and informal) between administrators and teachers, teachers and students, teachers and parents, etc. A low-transparency culture — for example, with a dominant top-down, highly regulatory ethos — may lead to a model that’s overly mechanistic. A school where collegiality trumps accountability may lead to a model that’s too laissez-faire.

Worst of all, if the “blame game” is alive and well in your school — a sign of a possibly toxic culture marked by the absence of mutual trust and support between faculty and administrators — then this will be the first, biggest, and arguably most difficult obstacle to overcome.

Having taken part in our school’s rigorous process of self-reflection about culture, we must emphasize the following: Mutual trust, respect, and the enthusiastic sharing of high but attainable growth-centered goals between teachers and administrators are essential for a faculty evaluation model that is authentic and meaningful. It helps to keep reminding all constituents of the goal of evaluation: the ongoing professional growth of highly skilled and dedicated educators. This is achievable when those being evaluated and those implementing the process see their relationship not as “boss and worker” but as a partnership of different but equally important roles in the success of the school mission.

Promote Participative Leadership

At Berkeley, our goal was to end up with a model and process that were transparent, reflected the positive culture operating at our school, and were driven from the “ground up” by our faculty members.

To this end, we formed an eight-member faculty committee (administrators were not involved at this initial stage) to take the lead in designing our new model. It comprised individuals from across the experience spectrum — from new teachers to veterans, each bringing a unique perspective. The committee did due diligence; it brainstormed, studied the available research on teacher evaluations, explored various models, and asked probing questions. Almost two months into the process, the committee informed the administration that it was moving toward a three-year rotation model — a model promoted favorably in NAIS publications over the years. Generally, the model would involve a first year of self-reflection and individual goal setting, a second year of peer observations, and a third year of formal evaluation.

Develop Your Own Criteria 

But what about the criteria? What should, for example, an administrator and peer observer look for in a classroom visit? In a formal evaluative year, what criteria should be used to determine professional excellence? 

Under the aegis of the faculty committee, the entire faculty met in small groups to brainstorm “elements of exceptional teaching” and “elements of professional excellence.” What emerged from these sessions were core principles and pedagogical values essential, in the opinion of our faculty, to creating the best possible educational environment for Berkeley students. The teacher-designed criteria, in fact, are the backbone of our new model. So, when a teacher is being observed by a peer or an administrator, or when an administrator is composing a summative “year three” formal evaluation, the criteria are based on what Berkeley teachers deem essential characteristics of excellence in and out of a Berkeley classroom.

Administrators were brought into the process, of course, and what transpired was something akin to a legislative committee at work. The administration, for example, made some recommendations regarding the number of peer observations and some criteria it hoped to see in the new model — and the committee considered them. How many peer observations would there be? Would peer observations be restricted to the upper division? How many classroom observations? What would the role of department chairs be during evaluations? A healthy, good-faith back-and-forth between the administration and the faculty committee was crucial to this entire exploration process.

Make It About Growth

When the three-year rotation model — which we call The Berkeley Professional Growth and Development Program — was unveiled at the end of the 2007–2008 school year, we felt confident that this new model had significant faculty buy-in (thanks to extensive faculty input). There is autonomy built in for teachers to design their own growth strategies and partake in meaningful self-assessment, while also maintaining an ongoing conversation with department chairs and administrators about individualized professional growth. There is authenticity via confidential peer observations, in which teachers are encouraged to have frank discussions with one another about improving classroom performance. There are criteria that an administrator or a peer observer can, in fact, observe during a visit. And there is a dynamic model now in place reflecting that we are, to borrow a phrase from Harvard professor Richard Elmore, a “community of practice,” or, to borrow another from the business schools, a “learning organization” — i.e., we value such things as collaboration, lateral accountability, innovation, versatility, the integration of new knowledge, and explicit norms.

Provide Holistic Feedback

In order to provide holistic feedback to teachers on their performance during the “year three” summative evaluation period, information is gathered from several individuals in addition to the lead administrator. These include, but are not limited to, the respective department chair, the dean of students (support for school rules and standards), the director of student activities (involvement in extracurricular duties), the academic counselors (communication with parents), and the registrar (timely completion of professional tasks such as student grades and written reports). The purpose is twofold. First, the faculty committee charged with drafting our model felt it imperative to give feedback on the full set of responsibilities associated with being a professional educator at our school. Accountability in these areas, the committee maintained, is just as important as accountability for what goes on in the classroom. Second, administrators were keen to make it evident that performance “evaluation” is not the purview of one individual (the lead administrator), and that determining commendations, recommendations, and required improvements for each faculty member is not “personal” or personalized. One positive byproduct of the comprehensive evaluation document — generated by the division director on behalf of these colleagues — is best summed up by this comment from a “year three” teacher: “This document really captures who I am and what I do.”

Don’t Forget the Kids

In keeping with the multisource, 360-degree feedback integrated throughout the model, anonymous student surveys of teachers are administered twice a year. The open-ended and closed-ended questions were designed by teachers, students, and administrators. The 25 questions are administered online, and the results — complete with percentages, bar graphs, and student commentary — are emailed to faculty. Contrary to the anxieties teachers everywhere express when the issue of student surveys is raised, we have found our students’ responses to be serious and thoughtful, and their recommendations on what a teacher can do to improve the overall classroom learning experience are reasonable, specific, and helpful.

Treat the Model as a Living Entity

The model is subject to annual review and amendment. In the past two years, we have continued to refine it, staying focused on our core belief that faculty evaluation should exist primarily to advance professional growth. We have added confidential peer observations to the “year one” self-reflection — teachers in this year observe a peer’s classroom four times. These complement the peer observations already in place for “year two,” when colleagues observe the teacher being evaluated. We have also asked that peer observations not be restricted to a single department, but instead involve faculty in other departments and other divisions. Branching out and observing, communicating, and sharing with educators outside of your discipline, division, and “comfort zone” is an energizing, highly valuable learning exercise.

In the past year, we have also phased out announced, full-period observations by administrators, thanks, in part, to the extensive research in this area by Kim Marshall — a former school principal and author of The Marshall Memo, a popular weekly newsletter on educational leadership. According to Marshall, mini-observations of five to eight minutes — unannounced, at any time, systematically cycling through all teachers — provide the best “representative slices” of what’s going on in the classroom. This approach allows the administrator to visit more classes, more often, and it assumes the day-to-day professionalism of the instructor. The “dog-and-pony show,” so common in announced, full-period observations, often does not reflect reality. And we should all ask ourselves these questions: Do administrators really need 40 minutes to assess the quality of teaching and learning going on in a classroom? Isn’t freeing an administrator from often unnecessary full-period observations a good thing? How many classes could we reasonably expect an administrator to visit under a full-period, announced model?

Share It

We keep our professional growth and development document online in a public folder for all in the Berkeley community to access and peruse. We have explained the model in great detail to parents at one of our monthly coffees. We presented it at a recent Florida Council of Independent Schools annual convention. We’re proud of it, and we know that the more people who read it, the more input we’ll receive on how to improve it.

Make It a Custom Fit

For many schools, the pursuit of a holistic means of evaluating faculty that meets the needs of all constituents (faculty, administration, students, and, yes, parents) is akin to the search for the “holy grail.” We believe we have found something that works well for us. However, the Berkeley Professional Growth and Development Program is not a one-size-fits-all model, and it may not be right for your school. Only an honest period of top-down, bottom-up, highly collaborative self-reflection about your current evaluative system — one that does not shy away from tackling such sensitive matters as culture and the level of trust that currently exists between the administration and faculty — can determine the model that best fits your institution.

Note

1. Theory-X, a term developed by Douglas McGregor at the MIT Sloan School of Management in the 1960s, assumes that workers are essentially lazy and need close supervision and carrot-and-stick motivation. It’s generally considered counterproductive today.

Carlo DiNota

Carlo DiNota is a member of Berkeley’s English department.

Hugh Jebson

Hugh Jebson is the assistant headmaster and upper division director at Berkeley Preparatory School (Florida).