In creating the Science and Mathematics Teaching Imperative (SMTI), APLU leaders were immediately faced with questions of “what to do” from the presidents and chancellors of some of America’s leading public research institutions who had agreed to participate in the effort. University leaders were willing to commit to producing more and better teachers of science and mathematics, but they and their deans and faculty wanted guidance and a common framework for their decision-making about program improvement to meet the SMTI long-term goals of increasing the quantity, quality, and diversity of science and mathematics teacher education programs. They wanted to know the specific strategies that would produce these results. And although there were prominent efforts with some evidence of promise or success – such as UTeach, the Woodrow Wilson Fellows Program, and Math for America – not all university leaders were convinced that these models would be the most appropriate for their particular campuses. These same leaders also wanted greater assurance, beyond state and national accreditation, that the programs developed by their faculties were coherent and effective.
In the face of these realities, and in the absence of a common tool, the developers of what has come to be known as the Teacher Education Program Assessment (TEPA, formerly the Analytic Framework) began creating a classification – almost a taxonomy – of critical components, goals, objectives, and strategies, codifying a shared language of concepts, practices, and assessments particular to science and mathematics teacher preparation.
The Teacher Education Program Assessment: Assessing Innovation and Quality Design in Science and Mathematics Teacher Preparation is intended to be a useful tool for achieving greater program coherence – and for providing greater assurance that program completers will possess sufficient knowledge and skills to teach effectively. The creators of TEPA do not claim that it captures the kind of “signature pedagogy” for science and mathematics teacher preparation for which Lee Shulman (2005) has so eloquently advocated. However, the authors have developed this tool with a strong belief that the answer to the question asked by Linda Darling-Hammond – “Can universities prepare teachers well?” – is yes. But universities can and must improve their programs to produce greater consistency in positive outcomes. Our experience in developing this tool confirms Darling-Hammond’s (2006) finding that “…it is possible (if not easy) to create the context for high quality teacher education within even the most resistant institution: the research university.”
TEPA was developed with significant input and critique from noted research scholars and education leaders across the nation. It is designed to be a shared tool that enables P–20 educators, teacher educators, content specialists, researchers, campus leaders, and policymakers to carefully assess the design of science and mathematics teacher preparation and in-service professional development programs. The framework will enable users to understand the rationale behind various program components, to evaluate their strength, and to compare programs and specific program features within and across states – and thereby to learn from one another and motivate program improvements. As it continues to evolve, and as evidence of success for program strategies is collected and shared, TEPA may become a normative tool for identifying leading or promising practices in teacher preparation and development.
The white paper, Developing the Analytic Framework: A Tool for Supporting Innovation and Quality Design in the Preparation and Development of Science and Mathematics Teachers, details the genesis and development of the Teacher Education Program Assessment. It includes an overview of purpose, structure, and next steps.