TY - JOUR
ID - 41014
TI - Comparing Kirkpatrick’s original and new model with CIPP evaluation model
JO - Journal of Advances in Medical Education & Professionalism
JA - JAMP
LA - en
SN - 2322-2220
AU - GANDOMKAR, ROGHAYEH
AD - Education Development Center, Department of Medical Education, Health Professions Education Research Center, Tehran University of Medical Sciences, Tehran, Iran
Y1 - 2018
PY - 2018
VL - 6
IS - 2
SP - 94
EP - 95
DO - 10.30476/jamp.2018.41014
N2 - In a young field like educational program evaluation, it is inevitable that conceptual frameworks such as the Kirkpatrick model are revised with time and with greater knowledge. The New World Kirkpatrick Model (NWKM) is the new version of the Kirkpatrick model, which is more open to context and process, and hence probably much closer to the context–input–process–product (CIPP) model (1). The aim of this paper is to explore the similarities and differences between three well-known evaluation models: the original and new versions of the Kirkpatrick model and the CIPP model. The original version of the Kirkpatrick model is an outcome-focused model that evaluates the outcomes of an educational program, for instance in the field of medical education, at four levels: reaction, learning, transfer, and impact (2). The model is rooted in a reductionist approach, suggesting that a program’s success or lack of success can be explained simply by reducing the program to its elements (i.e. its outcomes) and examining them (3). Yet Kirkpatrick’s original model fails to provide evaluators with insight into the underlying mechanisms that inhibit or facilitate the achievement of program outcomes (4). In response to this shortcoming, the new version of the Kirkpatrick model added new elements to recognize the complexities of the educational program context (5). The most prominent changes have occurred at Level 3, which now includes processes that enable or hinder the application of learned knowledge or skills. The required drivers that reinforce, monitor, encourage, and reward learners to apply what is learned during training, the on-the-job learning that happens outside the formal program, and learners’ motivation and commitment to improve their performance on the job are factors that may influence the outcomes at Level 3. Learners’ confidence and commitment, and learners’ engagement and subject relevance, were added to Level 2 and Level 1, respectively, to broaden the scope of evaluation at these two levels (5). Although the NWKM appears to better embrace the complexity of educational programs, some investigators may argue that it is similar to the CIPP evaluation model. I believe that there are some fundamental differences between them. The CIPP model stems from complexity theory, which regards the educational program as an open system with emergent, dynamic interactions among its component parts and the surrounding environment. As a result, CIPP pays explicit and implicit attention to the program context by treating context evaluation as a separate component among four complementary sets of evaluation studies, as well as by identifying contextual factors in the other components of the model through a variety of qualitative methods (6). On the other hand, the NWKM is limited to measuring the influence of some confounding factors, such as learner characteristics or organizational factors, on program outcome achievement (1). Kirkpatrick, like many traditional program evaluation models, focuses on proving something (i.e. outcome achievement) about a program.
Thus, it is usually conducted at the end of the program. CIPP, on the other hand, is oriented toward program improvement, providing useful information for decision makers during all phases of program development, even while the program is still being developed (7). The NWKM has broadened the scope of the traditional model by adding some process measures that enable evaluators to interpret the outcome evaluation results, but still with the aim of proving something about an educational program. Overall, notwithstanding some improvement, the NWKM still has theoretical differences with the CIPP model, resulting in varied methodological and practical preferences. However, it would not be unexpected to witness more convergence between these evaluation models as knowledge and experience grow in the future.
UR - https://jamp.sums.ac.ir/article_41014.html
L1 - https://jamp.sums.ac.ir/article_41014_b1c296482324738a973e95177a929ca0.pdf
ER -