Document Type: Letter to the Editor

Author

Education Development Center, Department of Medical Education, Health Professions Education Research Center, Tehran University of Medical Sciences, Tehran, Iran

Abstract

In a young field like educational program evaluation, it is inevitable that conceptual frameworks such as the Kirkpatrick model are revised with time and with greater knowledge. The New World Kirkpatrick Model (NWKM) is the new version of the Kirkpatrick model; it is more receptive to context and process and hence probably much closer to the context–input–process–product (CIPP) model (1). The aim of this paper is to explore the similarities and differences between three well-known evaluation models: the original and new versions of the Kirkpatrick model and the CIPP model.

The original version of the Kirkpatrick model is an outcome-focused model that evaluates the outcomes of an educational program, for instance in the field of medical education, at four levels: reaction, learning, transfer and impact, respectively (2). The model is rooted in a reductionist approach, which suggests that a program's success or lack of success can be explained simply by reducing the program to its elements (i.e. its outcomes) and examining them (3). Yet Kirkpatrick's original model fails to provide evaluators with insight into the underlying mechanisms that inhibit or facilitate the achievement of program outcomes (4). In response to this shortcoming, the new version of the Kirkpatrick model added new elements to recognize the complexities of the educational program context (5).

The most prominent changes have occurred at Level 3, which now includes processes that enable or hinder the application of learned knowledge or skills. The required drivers that reinforce, monitor, encourage, and reward learners to apply what is learned during training, on-the-job learning that happens outside the formal program, and learners' motivation and commitment to improve their performance on the job are interfering factors that may influence the outcomes measured at this level. Learners' confidence and commitment, and learners' engagement and subject relevance, were added to Level 2 and Level 1, respectively, to broaden the scope of evaluation at these two levels (5).

Although the NWKM appears to better embrace the complexity of educational programs, some investigators may claim that it is now similar to the CIPP evaluation model. I would argue that there are fundamental differences between them. The CIPP model stems from complexity theory, which regards the educational program as an open system with emergent, dynamic interactions among its component parts and the surrounding environment. As a result, CIPP pays explicit and implicit attention to the program context, both by treating context evaluation as a separate component of its four complementary sets of evaluation studies and by identifying contextual factors in the other components of the model through a variety of qualitative methods (6). The NWKM, by contrast, is limited to measuring some confounding factors, such as learner characteristics or organizational factors, that affect program outcome achievement (1).

Kirkpatrick, like many traditional program evaluation models, focuses on proving something (i.e. outcome achievement) about a program; thus, it is usually conducted at the end of the program. CIPP, on the other hand, is oriented toward program improvement, providing useful information for decision makers during all phases of program development, even while the program is still being developed (7).

The NWKM has broadened the scope of the traditional model by adding process measures that enable evaluators to interpret outcome evaluation results, but the aim remains to prove, rather than improve, an educational program. Overall, notwithstanding some improvement, the NWKM still has theoretical differences with the CIPP model, which result in varied methodological and practical preferences. However, it would not be unexpected to witness more convergence among these evaluation models as knowledge and experience grow in the future.

References

1. Moreau KA. Has the new Kirkpatrick generation built a better hammer for our evaluation toolbox? Med Teach. ;39:999–1001.
2. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: the four levels. San Francisco (CA): Berrett-Koehler; 2006.
3. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. ;34:e288–99.
4. Parker K, Burrows G, Nash H, Rosenblum ND. Going beyond Kirkpatrick in evaluating a clinician scientist program: it’s not “if it works” but “how it works”. Acad Med. 2011;86:1389–96.
5. Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick’s four levels of training evaluation. Alexandria (VA): ATD Press; 2016.
6. Mirzazadeh A, Gandomkar R, Mortaz Hejri S, Hassanzadeh GhR, Emadi Koochak H, Golestani A, et al. Undergraduate medical education programme renewal: a longitudinal context, input, process and product evaluation study. Perspect Med Educ. 2016;:15–23.
7. Gandomkar R, Jalili M, Mirzazadeh A. Evaluating assessment programs using program evaluation models. Med Teach. 2015;37:792–3.