Research Committee Review: “Evaluating the Outcome of Clinical Pastoral Education: A Test of the Clinical Ministry Assessment Profile”
By Moses Taiwo, ACPE Certified Educator | January 7, 2019
Happy New Year, ACPE! The Research Committee members continue to take turns every two weeks sharing with you insights gained from an assigned article. The goal is to inspire your interest in research that demonstrates the effectiveness of CPE programs, such as students' readiness for ministry and professional functioning. This seventh article review is about the Clinical Ministry Assessment Profile (CMAP), an instrument designed to measure the effects of CPE on professional practice. In case this review awakens your interest, the citation is: George Fitchett and George T. Gray (1994). Evaluating the outcome of clinical pastoral education: a test of the clinical ministry assessment profile. Journal of Supervision and Training in Ministry, 15, 3-22.
Evaluating the outcomes of Clinical Pastoral Education is not new; we often hear personal testimony from participants (students and educators alike) about the profound impact the experiential learning has had on their individual growth and ministry! Several quantitative studies exist that measure whether a CPE unit or program has made a significant impact on participants' character or personality and their overall professional functioning. For example, Paul E. Derrickson reviewed 136 articles, 39 of which showed how different metrics used to measure CPE outcomes indicated changes in participants' personal growth and professional functioning [Derrickson, P.E. (1990). Instruments used to measure changes in students preparing for ministry: a summary of research on clinical pastoral education students. The Journal of Pastoral Care, 4, 343-356]. The CMAP was developed, in conjunction with ACPE, by the Association of Theological Schools (ATS), which accredits graduate theological education in the United States and Canada. The instrument was built on ATS Director David Schuller's Readiness for Ministry (1975) and made available in 1983 for use by CPE supervisors (now educators). The CMAP was designed to identify and rank areas for individual learning and to determine whether growth in the identified areas had occurred.
CPE supervisors and students from five regions read and evaluated the 375 items, or areas, originally drawn from the Readiness instrument, and they could add items to the list that were significant to them. With 101 additions, a nationwide sample of 349 participants then ranked attitudes, behaviors, and personality traits in terms of their importance for ministry. Factor and cluster analysis of the data revealed 64 dimensions that differed somewhat from the 64 listed in the Readiness study. The CMAP tool has 80 items, each on a six-point scale, grouped into 26 sub-scales; these were further grouped into eight major areas describing the person and functions of the individual clergy. It appears that, to date, Rush University Medical Center in Chicago has successfully used the CMAP tool to study 33 CPE students who had completed a yearlong program. Despite the small sample size and the absence of a control group (such as non-CPE seminary students) for comparison, the study still showed positive changes between the pre- and post-test CMAP scores. The change reported on ten of the sub-scales was statistically significant at the .01 level. For a detailed analysis of the findings, please see Fitchett and Gray (1994), cited above, as well as George Fitchett's unpublished preliminary report, presented at the 1996 ACPE Annual Meeting in Buffalo, NY, entitled Evaluating the Outcome of Clinical Pastoral Education: The Clinical Ministry Assessment Profile Study.
The question, then, is: if the CMAP instrument was deemed useful for evaluating changes taking place in CPE students over the course of a year, why has use of the tool declined? The two reasons advanced by Fitchett and Gray were that "administering the test was time consuming, and a program was often half over before initial results were available. Further, the reports provided an overwhelming amount of information, without any summary of its implications for supervision and no normative data" (p. 17). But how accessible is CMAP for use by CPE educators and researchers today? Are there other instruments, besides CMAP, that could capture the desired CPE outcomes for curriculum development? Your suggestions and ideas are welcome!