Blog: Idea Exchange

Seeing the Challenge as an Opportunity: CAEP Evidence and Educator Preparation

Andrea Barra September 25, 2015

I recently attended the fall 2015 Council for the Accreditation of Educator Preparation (CAEP) conference in Washington, DC, and it was clear to me that the old ways of doing accreditation had changed. Not only was there a new sheriff in town (interim president Chris Koch), there was a new format for presentations that was much more educator preparation program (EPP) focused. Institutions shared best practices on everything from clinical partnerships to dispositions assessment to ethical teaching to student perception surveys. This shift away from sessions based solely on standards seems reflective of CAEP’s emphasis on giving EPPs the freedom to determine which evidence best demonstrates satisfaction of the standards. The change, while scary for some who are used to having a straightforward path to accreditation, is truly an opportunity for EPPs to do the rewarding work of understanding and presenting what they do best.

A clear message that I took from Dr. Koch’s opening remarks was that accreditation shouldn’t be a “checklist”; that is, the process of determining whether or not an EPP meets the standards isn’t about satisfying a series of requirements that are disconnected from the work done on a daily basis. Instead, it should be a process of discovery and intentional information gathering that allows an EPP to see its strengths and challenges clearly, presenting evidence along the way to make the case. This is great news for teacher prep programs that have felt in the past that the accreditation process did not reflect the difficult and amazing work their units were undertaking. Now is their chance to shine.

When I was a School of Education assessment coordinator assisting in the accreditation process, there was often consternation among my faculty members that the data needed to prepare the self-study felt distant from their regular work with candidates, fieldwork, and P-12 students. In order to “fill in the blanks” for the accreditor, they had to focus on narrow measures that didn’t encompass the great diversity of our candidates’ experiences or the uniqueness of our EPP’s place in urban education—in other words, we couldn’t show what made us special. The broader scope of acceptable evidence now encouraged by CAEP would allow an institution to highlight such distinctions.

Another endorsement of the less prescriptive execution of the standards came from a CAEP-led session on possibilities for presenting evidence. Participants were shown two examples of data sets that could be applied to make a case for candidates’ use of research and evidence (Standard 1.2). The takeaway for EPPs was to be thoughtful about evidence selection. Are closed-ended survey questions a weaker example than open-ended interview questions? Is a chart that lists specific research-based practices a stronger example than a checklist of courses? Is there a difference between gathering and presenting quantitative versus qualitative data? From my perspective, these are vital questions to explore. We all know that not all data are created equal. By considering the quality and depth of evidence when choosing how to demonstrate proficiency on the standards, EPPs can only get stronger.

It is understandably daunting, however, not to have a specific roadmap to follow. But by turning the focus away from CAEP itself and onto individual EPPs, participants at the conference were able to gather ideas and information from other practitioners and find methods they could bring back to their campuses. This connects real-life, concrete examples to the standards in a way that can be appreciated beyond the context of a particular institution. When attending conferences, I am always encouraged and energized by listening to the variety of work being undertaken by EPPs across the country. Great ideas can spark other great ideas!