Document Type : Letter to Editor
Author
Education Development Center, Department of Medical Education, Health Professions Education Research Center, Tehran University of Medical Sciences, Tehran, Iran
Abstract
In a young field like educational program evaluation, it is inevitable that conceptual frameworks such as the Kirkpatrick model are revised with time and with greater knowledge. The New World Kirkpatrick Model (NWKM) is the new version of the Kirkpatrick model; it is more receptive to context and process, and hence probably much closer to the context–input–process–product (CIPP) model (1). The aim of this letter is to explore the similarities and differences between three well-known evaluation models: the original and new versions of the Kirkpatrick model and the CIPP model.

The original version of the Kirkpatrick model is an outcome-focused model that evaluates the outcomes of an educational program, for instance in the field of medical education, at four levels: reaction, learning, transfer and impact, respectively (2). The model is rooted in a reductionist approach, suggesting that a program's success or lack of success can be explained simply by reducing the program into its elements (i.e. its outcomes) and examining them (3). Yet Kirkpatrick's original model fails to provide evaluators with insight into the underlying mechanisms that inhibit or facilitate the achievement of program outcomes (4). In response to this shortcoming, the new version of the Kirkpatrick model added new elements to recognize the complexities of the educational program context (5).

The most notable changes occurred at Level 3, which now includes processes that enable or hinder the application of learned knowledge or skills. The required drivers that reinforce, monitor, encourage, and reward learners to apply what is learned during training; on-the-job learning that happens outside the formal program; and learners' motivation and commitment to improve their performance on the job are interfering factors that may influence the outcomes measured at Level 3. Learners' confidence and commitment, and learners' engagement and subject relevance, were added to Level 2 and Level 1, respectively, to broaden the scope of evaluation at these two levels (5).

Although the NWKM appears to better embrace the complexity of educational programs, some investigators may argue that it is similar to the CIPP evaluation model. I suggest that there are some fundamental differences between them. The CIPP model stems from complexity theory, which regards the educational program as an open system with emergent, dynamic interactions among its component parts and the surrounding environment. As a result, CIPP pays explicit and implicit attention to the program context, both by treating context evaluation as a separate component of four complementary sets of evaluation studies and by identifying contextual factors in the other components of the model through a variety of qualitative methods (6). The NWKM, on the other hand, is limited to measuring the effect of some confounding factors, such as learner characteristics or organizational factors, on program outcome achievement (1).

The Kirkpatrick model, like many traditional program evaluation models, focuses on proving something (i.e. outcome achievement) about a program; thus, it is usually applied at the end of the program. CIPP, in contrast, is oriented toward program improvement, providing useful information for decision makers during all phases of program development, even while the program is still being developed (7).
The NWKM has broadened the scope of the traditional model by adding some process measures that enable evaluators to interpret outcome evaluation results, but still with the aim of proving an educational program. Overall, notwithstanding some improvements, the NWKM still has theoretical differences from the CIPP model, resulting in varied methodological and practical preferences. However, it would not be unexpected to witness more convergence among these evaluation models as knowledge and experience grow in the future.
References
1. Moreau KA. Has the new Kirkpatrick generation built a better hammer for our evaluation toolbox? Med Teach. 2017; 39:999–1001.
2. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: the four levels. San Francisco (CA): Berrett-Koehler; 2006.
3. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012; 34:e288–99.
4. Parker K, Burrows G, Nash H, Rosenblum ND. Going beyond Kirkpatrick in evaluating a clinician scientist program: it's not "if it works" but "how it works". Acad Med. 2011; 86:1389–96.
5. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick's four levels of training evaluation. Alexandria (VA): ATD Press; 2016.
6. Mirzazadeh A, Gandomkar R, Mortaz Hejri S, Hassanzadeh GhR, Emadi Koochak H, Golestani A, et al. Undergraduate medical education programme renewal: a longitudinal context, input, process and product evaluation study. Perspect Med Educ. 2016; 5:15–23.
7. Gandomkar R, Jalili M, Mirzazadeh A. Evaluating assessment programs using program evaluation models. Med Teach. 2015; 37:792–3.