I am often approached by people who have developed a mindfulness-based or contemplative program and want me to help them prove its efficacy. These conversations both hearten and dismay me. I’m heartened by the inspiration to help children and the ambition to build an evidence-based program. But I’m also discouraged by the lack of wherewithal to do the necessary research, both in terms of financial resources and the knowledge needed for the developer/researcher partnership to work. I’m in a position to understand and empathize with this dilemma because I’ve seen it from both sides: as a former teacher and teacher educator, I’ve developed programs myself; later, I trained as a scientist, and I now develop and research the effects of these programs in educational settings.

I’m pleased that the Garrison Institute will be hosting an event that brings both perspectives together in the interest of enabling more of these programs to be studied. “Contemplative Teaching and Learning: Strengthening Evidence-Based Practice” is a one-day workshop to be held in Denver on April 26, the first day of the inaugural International Symposium for Contemplative Studies, co-sponsored by the Mind and Life Institute. The workshop will be led by Dr. Mark Greenberg, Director of the Prevention Research Center at Penn State University, and me, and we’ll be joined by a panel of other educators and scientists who develop school-based contemplative programs for students and teachers, test their efficacy, or both.

The workshop will guide developers of contemplative education programs through the first steps of planning a study, including thinking through their programs’ underlying logic model or theory of change and developing a testable hypothesis. The process begins by asking, “What change do I think my program is effecting? Why?
For whom?” “What are the most immediate outcomes I observe, and what secondary outcomes do these lead to?” For example, suppose a mindfulness-based intervention designed for elementary school-age students focuses on increasing self-awareness and self-regulation. That, in turn, may lead to a calmer classroom atmosphere in which more learning can occur. That’s a hypothesis that can be tested by various research methods.

It’s not necessary to start with a randomized controlled trial (RCT); in fact, it’s not advisable. RCTs are expensive, and in the beginning it’s often unclear which types of measures will ultimately work. The early stages of program development are an iterative process combining trial and error with various forms of qualitative data collection. Focus groups, journals, interviews, and evaluation surveys can help a program developer and researcher flesh out the hypotheses that will shape a measurement model for future research. The next stage involves refining and testing this model in small pilot studies. Only when a program has shown enough promise to warrant a full RCT, and to justify its costs, should one be planned.

The RCT is called “the gold standard” because it’s the most rigorous way we have to show that a program or intervention is the actual agent of change. Researchers randomly assign half of the schools, students, or teachers in the study to receive the new program (the intervention group), while the other half (the control group) does not receive it. Each group is measured on the hypothesized outcomes both before and after the program is delivered to the intervention group. If the scores of the intervention group are significantly different from those of the control group, that is strong evidence that the program is responsible for the change.
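To make that design concrete for readers who think in code, here is a minimal sketch of the logic of an RCT: random assignment, pre/post measurement of each group, and a comparison of the change scores. Everything here is invented for illustration (the participant names, the "stress" scores, and the assumed 4-point effect are hypothetical, not data from any real study); significance is checked with a simple permutation test rather than any particular statistics package.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical participants: 40 teachers with simulated baseline stress scores.
teachers = [f"teacher_{i}" for i in range(40)]
baseline = {t: random.gauss(50, 5) for t in teachers}

# Random assignment: shuffle the roster, then split it in half.
random.shuffle(teachers)
intervention, control = teachers[:20], teachers[20:]

# Simulated post-program scores. We *assume* the program lowers stress by
# 4 points on average for the intervention group (purely illustrative).
post = {}
for t in teachers:
    effect = -4.0 if t in intervention else 0.0
    post[t] = baseline[t] + effect + random.gauss(0, 3)  # measurement noise

# Outcome: change from baseline (post minus pre), compared across groups.
change = {t: post[t] - baseline[t] for t in teachers}
diff = (statistics.mean(change[t] for t in intervention)
        - statistics.mean(change[t] for t in control))

# Permutation test: how often does randomly relabeling the two groups
# produce a difference at least as large as the one we observed?
observed = abs(diff)
values = list(change.values())
n_perm = 2000
count = 0
for _ in range(n_perm):
    random.shuffle(values)
    d = abs(statistics.mean(values[:20]) - statistics.mean(values[20:]))
    if d >= observed:
        count += 1
p_value = count / n_perm

print(f"mean change difference (intervention - control): {diff:.2f}")
print(f"approximate p-value: {p_value:.3f}")
```

Because assignment is random, any systematic difference in how the two groups change over time can be attributed to the program rather than to pre-existing differences between the groups, which is the whole point of the design.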
That’s the basic design of the new research on the Garrison Institute’s CARE for Teachers program in New York City schools, which is being funded by a nearly $3.5 million grant from the US Department of Education’s Institute of Education Sciences (IES) through our partnership with Penn State University. We’re very excited about it, but program developers should note: it took us years of background research, thought, collaboration, and publication before we were in a position to receive this level of federal support.

In 2009, Mark Greenberg and I published an article presenting the “Prosocial Classroom” model, highlighting the importance of teachers’ social and emotional skills and well-being in relation to desired classroom and student outcomes. Building on the model, my colleagues Christa Turksma, Richard Brown, and Kari Snowberg created the CARE for Teachers professional development program, which combines mindful awareness practices and emotion skills training in a way that is specifically targeted to the demands of the classroom. With a first, smaller grant from IES, we spent three years refining and testing our intervention and measurement model. In a pilot RCT, we found that CARE reduced teachers’ stress and promoted their well-being, efficacy, and mindfulness. Qualitative data collected from focus groups suggested that CARE helped teachers provide social, emotional, and academic support to their students and improved the classroom climate. We published those results.

This work put us in a position to apply for the new IES grant to conduct a large RCT. The new research aims to replicate the pilot study’s effects on teacher outcomes and to demonstrate effects on classrooms and students. If we succeed in showing those effects, then and only then will we have demonstrated that CARE is effective in doing what we designed it to do: improving outcomes for teachers and students. It has been a long and arduous process, but there’s no shortcut or substitute for it.
Advancing a field like contemplative education rightly demands evidence and rigor. We’ve been at the forefront of the field, and now we’re happy to be sharing knowledge of how to build the evidence base for it more widely at the Denver workshop. We thank our partners at the Mind and Life Institute for co-sponsoring it and the 1440 Foundation for supporting it.