Document Type : Original Article

Authors

Ali Khalafi, Akbar Abbasi

Department of Anesthesiology, School of Allied Medical Sciences, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran

DOI: 10.30476/jamp.2024.102318.1969

Abstract

Introduction: Lack of clinical competence can endanger patient safety and reduce the quality of health care services. The aim of this study was to examine the impact of formative Objective Structured Clinical Examinations (OSCEs) with immediate verbal and visual feedback on the clinical competence of fourth-year nurse anesthesia students.
Methods: This was a single-blind quasi-experimental study with a pre-test/post-test design conducted in compliance with the CONSORT (Consolidated Standards of Reporting Trials) statement. Forty-eight students were allocated to intervention (n=24) and control (n=24) groups. During the academic semester, the students of the intervention group attended 3 sessions of formative OSCE (5 stations) with immediate verbal and visual feedback, whereas the students of the control group received in-person feedback according to the curricular routine. The data collection tool included two sections: the first was a questionnaire collecting demographic information such as age, sex, grade point average, and marital status; in the second, the students' clinical competence was measured with the Common Clinical Assessment Tool (CCAT). The collected data were analyzed by analysis of covariance (ANCOVA), paired t-test, Chi-square, and Fisher's exact test in SPSS, version 16.
Results: Comparison of post-test scores by ANCOVA showed a significant difference between the groups (P=0.001), reflecting a significant positive change in the overall clinical competence score of the intervention group after the formative OSCE.
Conclusion: This study showed that regular implementation of formative OSCEs in nurse anesthesia education fosters learning, improves students' educational behaviors, and helps them learn more efficiently. However, further studies with larger numbers of participants are recommended to confirm this conclusion.


Introduction

Assessing the clinical competence of students of medical sciences is the sine qua non of maintaining the credibility of educational and healthcare systems (1). Clinical competence is the set of knowledge, skills, and attitudes required to deal with a specific clinical situation. Its positive influence extends beyond patient outcomes, demonstrably impacting healthcare workers' self-efficacy, collegial empathy, and overall job satisfaction (2). The lack of specified, well-defined standard criteria for evaluating students' level of clinical competence is an enormous challenge for the healthcare system (3). Previous studies have shown that a lack of clinical competence can endanger patient safety and reduce the quality of health care services (4). Therefore, health care systems should constantly evaluate and prioritize clinical competence indicators (5).

As the cornerstone of any educational attempt, evaluation provides evidence of success in achieving goals. Depending on its timing and purpose, evaluation can be diagnostic, formative, or summative (6). Formative evaluations (assessment for learning) are usually performed at certain intervals during the semester or academic year, and their purpose is to provide feedback that improves and consolidates learning. Previous studies have shown that feedback has been a neglected area in medical sciences education (7). Feedback, as the main element of formative evaluation, plays a very important role in improving the qualitative and quantitative level of students' learning and in bridging assessment and learning (8). Feedback is effective and constructive when teachers simultaneously pay attention to students' cognitive, psychomotor, and emotional characteristics. Constructive feedback not only increases each student's awareness of their progress and academic achievement (cognitive and psychomotor aspects) but also gradually fosters a sense of self-efficacy and control over learning (motivational aspect) (9).

Known as the gold standard for evaluating clinical performance, the Objective Structured Clinical Examination (OSCE) is a clinical competency measurement method that focuses on observable results and behaviors. Its most significant advantage is the integration of theory and practice, which allows the student to learn in simulated environments (10). This test can evaluate the level of achievement of educational goals in terms of the students' cognitive, emotional, and psychomotor abilities (11). The OSCE format serves a dual purpose: it facilitates the application of evidence-based knowledge and skills by students, while simultaneously offering instructors a structured framework for identifying and addressing performance deficiencies (12). The OSCE is organized objectively in different stations, and the examinees are asked to perform certain clinical tasks at each station, such as history taking, physical examination, counseling, or patient management (13).

The OSCE can be used to measure the clinical competence of students summatively or formatively (11). It has been shown that increasing students' exposure to formative OSCEs makes them perform better in their summative OSCEs (12). Despite such positive evidence, no study has been conducted in Iran to measure the impact of formative OSCE on the level of clinical competence of students. Moreover, the literature indicates contradictory results from studies conducted in different parts of the world. For example, in a recent study, the formative OSCE improved the level of clinical competence of medical internship students (13). Another study indicated that formative OSCE improved the students' summative OSCE scores (14). However, the results of a previous study showed that conducting a formative OSCE had no effect on the summative OSCE scores of the intervention group compared to the control group (15).

Accordingly, it is imperative that students of medical sciences achieve an appropriate level of clinical competence before entering the clinical arena so that they will be able to provide medical and nursing services efficiently and effectively. Advancements in science and technology have transformed feedback delivery, and the use of diverse, immediate, and visually oriented feedback mechanisms has demonstrably enhanced learners' acquisition and retention of clinical skills. Nevertheless, there is a paucity of research addressing the impact of assessing the competence of nurse anesthesia students using formative OSCE with immediate verbal and visual feedback. Moreover, contradictory results have been reported by studies examining the impact of formative OSCE on summative OSCE scores. Therefore, the present study was conducted to investigate the effect of this method on the level of clinical competence of nurse anesthesia students.

Methods

Design

This was a single-blind quasi-experimental study with a pre-test/post-test design conducted from October 2023 to January 2024.

Settings

The intervention of this study was carried out in the clinical skills hall of the School of Allied Medical Sciences of Ahvaz Jundishapur University of Medical Sciences (AJUMS). The pre-test and post-test were administered in Imam Khomeini Hospital of Ahvaz, affiliated to AJUMS, Ahvaz, Iran.

Participants

In this study, 52 fourth-year nurse anesthesia students of AJUMS who met the inclusion criteria were initially included, using the census method. Participants were recruited based on the following inclusion criteria: enrollment in the fourth year of the nurse anesthesia program, consistent attendance at formative OSCE sessions, expressed willingness to participate in the research, and provision of informed consent. Participants were excluded if they took part in parallel interventions or withdrew from the study at any stage. The data of 4 students who met the exclusion criteria were not subjected to statistical analysis. Finally, 48 students were allocated to intervention and control groups.

Sample size

Eligible students who met the inclusion criteria and voluntarily agreed to participate were recruited. Ultimately, 52 eligible participants were enrolled, and 4 withdrew and had their data excluded from the analysis. Based on the study conducted by Chen et al., who examined the clinical competence of nursing students (16), the required parameters were entered into the sample size calculation software and the sample size was determined (Figure 1 shows the flow diagram of the study).

Figure 1. The flow diagram of the study

Instruments

In this study, the data collection tool included two sections. The first section included a questionnaire to collect demographic information such as age, sex, grade point average and marital status.

In the second section, the data were collected through the Common Clinical Assessment Tool (CCAT). The CCAT was developed in 2019 by a group of nurse anesthesia professors and trainers commissioned by the Council on Accreditation of Nurse Anesthesia Educational Programs in the United States in order to assess the clinical competence of Student Registered Nurse Anesthetists (SRNAs) before entering the clinical work environment. The CCAT includes 22 competencies across 4 areas: patient safety and anesthesia care (6 competencies), knowledge and critical thinking (6 competencies), professional communication and collaboration (4 competencies), and professional role (6 competencies). Each competency is rated using progress indicators, which are validated descriptions of the expected behaviors at 5 levels: not applicable, safety concern, novice, advanced beginner, and competent/proficient to enter practice (17). The validity of the CCAT was calculated by its developers (CVI = 83%), but its reliability was not reported. In this study, the content validity and reliability of the CCAT were therefore re-evaluated. To assess content validity, the opinions of three anesthesiologists with over ten years of experience as faculty members, four faculty members in the Department of Anesthesia Nursing with an average of over ten years of teaching experience, three master's degree holders in anesthesia nursing education, and two practicing nurse anesthetists were sought. According to Lawshe's table, the acceptable cut-off values for 12 experts are CVR > 0.56 and CVI > 0.79. All CCAT items exceeded these cut-offs, so all questions were retained in the final Persian version, ensuring consistency with the original version. For reliability, the intra-class correlation coefficient (ICC) was used: two evaluators rated the activities of nurse anesthesia students while they performed anesthesia care, using the CCAT. The ICC, computed with a two-way random effects model, was 0.824 (95% CI: 0.349 to 0.955; P=0.007).
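For reference, the indices above follow Lawshe's standard content validity formulas (restated here for clarity; they are not given explicitly in the original report):

CVR = (n_e - N/2) / (N/2)   and   I-CVI = n_r / N,

where N is the total number of experts (12 in this study), n_e is the number of experts rating an item as essential, and n_r is the number rating it as relevant. With N = 12, Lawshe's table yields the minimum acceptable CVR of 0.56 cited above.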

Procedure

After the necessary approval was obtained from AJUMS Research Vice-Chancellor, this study was conducted in 5 stages during one academic semester:

The first stage: Briefing sessions

The research protocol commenced with a two-pronged orientation program designed to acquaint participants with the study's objectives and procedures. All participants attended a one-hour session introducing the OSCE evaluation method. Subsequently, a dedicated two-hour session for the intervention group elaborated on the formative OSCE format and the research team's feedback protocol. Following these sessions, potential participants were presented with a written consent form and a demographic questionnaire for completion.

The second stage: Selection of evaluators

The study employed a five-member panel of evaluators, each possessing over five years of experience as a nurse anesthesia instructor. To ensure standardization and inter-rater reliability, a one-hour training session was conducted within the clinical skills hall of the School of Allied Medical Sciences (study implementation site). This session facilitated team familiarization and comprehensive training on the relevant OSCE stations. Notably, these evaluators were not involved in the pre-test or post-test assessments. Additionally, a separate staff member, experienced in clinical work and unknown to the students, was selected and trained to act as a standardized patient.

The third stage: Pre-test

The pre-test was conducted to evaluate the competence of all students. This evaluation was done by the structured observation method, using the CCAT, in the operating room of Imam Khomeini Hospital on real patients during routine general anesthesia. To this aim, two additional evaluators (different from the 5 main evaluators of the study), each with 10 years of clinical experience in anesthesia nursing, were invited. These evaluators were blinded to group allocation. In order to prevent disclosure of exam information, the students attended the operating room at 8 am and were divided into two groups. Each student attended the operating room as a nurse anesthetist along with the charge nurse anesthetist. All procedures before, during, and after anesthesia, performed by the student under the full supervision of the charge nurse anesthetist, were carefully evaluated by the evaluator and recorded in the CCAT checklist. The time allotted for each student to perform the tasks was the same as that set in the intervention (10 minutes). No feedback was provided by the evaluators to the students during the evaluation.

The fourth stage: Intervention

This stage included the implementation of the formative OSCE in the intervention group, which lasted for three months during an academic semester. First, the OSCE stations were designed by the research team along with the evaluation checklist. The contents of the stations and the expected competencies were designed and blueprinted based on the expected abilities of nurse anesthetists in Iran and the CCAT used in the pre-test and post-test. All the content, order, and scenarios of the stations were approved by three faculty members of the Anesthesia Nursing Department. The scenarios underwent two to three rounds of review and quality control by faculty, and standards were set via the modified Angoff method, with marking checklists and passing scores determined. The designed OSCE consisted of 5 stations: preparation of the anesthesia room; preoperative patient assessment, knowledge, and critical thinking; actions before induction of anesthesia (including monitoring the patient, paying attention to patient safety, and talking to the patient to reduce stress); actions during induction and perianesthesia; and post-anesthesia measures for preparing the patient for emergence (Table 1). All stations focused on the important and vital skills necessary for the administration of general anesthesia by the nurse anesthetist. At each station, an explanatory board was installed describing the station and the time allotted to it. Each OSCE station was overseen by a dedicated evaluator. To facilitate learning and skill refinement, evaluators provided immediate, formative feedback to students upon completion of each station, highlighting identified areas for improvement. Evaluators documented student performance at each station using a standardized checklist. These data, along with the provided feedback, facilitated the identification of individual strengths and weaknesses, and the resulting insights were incorporated into subsequent intervention sessions to address specific learning needs and reinforce areas of proficiency. The time allotted for passing each station was 10 minutes, with an additional 5 minutes for the evaluator to provide feedback. The formative OSCE was delivered at three distinct points, at the conclusion of each month of the academic semester, coinciding with the students' ongoing clinical rotations and studies as outlined in the department's approved program.

The first station: Anesthesia room preparation
- Preparation of a tracheal tube suitable for adults and checking the cuff
- Preparation of a suitable airway for adults and children
- Preparation of the laryngoscope and checking the adult blade
- Preparing and checking the suction device
- Checking the presence of adhesive tape
- Preparation of anesthetic drugs for the operating room
- Dilution of anesthetic drugs (midazolam, thiopental) for children

The second station: Pre-operative assessment, knowledge, and critical thinking
- Checking the details of the file against the patient's bracelet
- Checking for informed consent in the file
- Knowing about the patient's medications (aspirin, blood pressure medication, etc.)
- Knowing about food and drug allergies
- Examining the patient's airway (Mallampati, thyromental distance)
- Assessing the patient's physical status class based on ASA
- Checking for primary ischemia on the cardiogram and checking the initial tests (important cardiac leads)
- Checking the patient's NPO status

The third station: Procedures before induction of anesthesia
- Patient monitoring
- Choosing the right mask and tube
- Preparation of the underlay and bandage and adjusting the height of the bed (patient safety)
- Serum therapy of the patient before induction
- Communication with the patient

The fourth station: Procedures during induction and perianesthesia
- Correctly holding the mask with a one-handed C-shaped grip
- Correct intubation
- Holding the laryngoscope correctly
- Correct insertion of the laryngoscope from right to left without levering on the teeth
- Fixing the tracheal tube with bandage or tape
- Controlling blood pressure manually
- Calculating the intake of maintenance fluids with the relevant formulas
- Calculating allowable bleeding based on the ABL formula (see the note after this table)
- Controlling areas under pressure (elbows, back of the head)

The fifth station: Post-anesthesia procedures
- Returning the patient to manual respiration
- Reducing anesthetic gases
- Suctioning secretions before extubation
- Suctioning secretions after extubation
- Applying positive pressure before extubation
- Masking the patient after extubation
- Delivery to recovery and respiratory care

Table 1. Description of the stations of the formative OSCE test
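Note: the ABL (allowable blood loss) formula referenced in the fourth station is not spelled out in the table; in its commonly taught form (the exact variant used in the course is an assumption here) it is

ABL = EBV x (Hct_initial - Hct_minimum acceptable) / Hct_initial,

where EBV is the estimated blood volume (body weight multiplied by an age-appropriate estimate, roughly 65-75 mL/kg for adults); some variants divide by the average of the two hematocrit values instead.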

At the end of the first month, the first stage of the intervention, the first formative OSCE, was implemented in the clinical skills hall. The students of the intervention group first gathered in a classroom, and each student was then asked to enter the clinical skills hall to begin. The students started from the first station and proceeded until they reached the end of the anesthesia process (i.e., emergence) at the fifth station. At the first station, the student had to perform all procedures, including checking the anesthesia machine, preparing serum, preparing the tracheal tube and airway, and checking and preparing the laryngoscope and other necessary equipment, for a hypothetical patient who was to undergo cervical fracture surgery.

The second station covered pre-operative assessment, knowledge, and critical thinking; it was completely interactive and involved a trained standardized patient with whom the students communicated on the basis of a previously prepared hypothetical hospital file. The students performed the necessary measures in the handover and examination of the patient, including examination of the medical records and history, examination and inspection of the airway, and preparation of the anesthesia plan according to the obtained information. At this station, the evaluator helped the students to conduct a complete review of all the possibilities regarding the patient's type of anesthesia and the risks involved.

The third station included the necessary measures before induction, namely ensuring the safety of the patient on the bed, full monitoring of the patient, and verbal communication to reduce stress. At the fourth station, the student had to perform skills such as intubation and laryngoscope insertion, masking and airway maneuvers including the jaw thrust, and tube fixation. At this station, the evaluator, focusing on the correct way of performing these skills, gave the students explanations about important events such as encountering a difficult intubation and how to manage it.

Finally, the fifth station included the measures necessary for the process of emergence. At this station, the evaluator gave the students tips about important events in the process of emergence, including laryngospasm, correct suctioning, oxygen delivery after extubation, and the importance of masking at the end of anesthesia. All the activities of the students at each station were filmed, and the feedback was recorded by the research team and then sent to the students individually. Upon completion of the procedures, each student left the hall through a separate exit and rested in a separate classroom, so there was no verbal interaction between the students. Performance and scores in the intervention had no effect on the students' official course grades in the academic semester, and participation was entirely voluntary.

During the intervals between the three formative OSCE sessions, clinical instructors were responsible for training the students of both the intervention and control groups in the internship, and they provided in-person feedback in the internship according to the curricular routine. The students of both groups could contact the instructor and the research team at any time during the study to raise their questions and problems. At the end of the second and third months, the second and third formative OSCEs were performed for the intervention group with the same method as the first, and feedback was provided. In this way, the students of the control group received routine clinical training, while the intervention group additionally participated in three rounds of the formative test.

The fifth stage: Post-test

After completion of the intervention, fifteen days after the last OSCE, the post-test was administered in the operating room of Imam Khomeini Hospital under completely realistic conditions.

Data analysis

The data were analyzed by descriptive statistics, including mean, standard deviation, frequency, and percentage. The Kolmogorov-Smirnov test was used to check the normality of the data. Normally distributed quantitative variables were analyzed by ANCOVA and the paired t-test, while the Chi-square and Fisher's exact tests were used to analyze qualitative data. All statistical analyses were performed using SPSS version 22, and the significance level was set at 0.05.
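As an illustration only (the original analysis was performed in SPSS, not with the code below), the same set of tests could be reproduced with open-source tools roughly as follows; the file name and column names (group, sex, pre_total, post_total) are hypothetical.

```python
# Illustrative sketch of the reported analyses; column names are assumed, not taken from the study.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("ccat_scores.csv")  # hypothetical file: group, sex, pre_total, post_total

# Kolmogorov-Smirnov normality check on standardized post-test scores
z = (df["post_total"] - df["post_total"].mean()) / df["post_total"].std(ddof=1)
print(stats.kstest(z, "norm"))

# ANCOVA: post-test score by group, adjusting for the pre-test score (as in Table 3)
ancova = smf.ols("post_total ~ pre_total + C(group)", data=df).fit()
print(ancova.summary())

# Paired t-test of pre- vs post-test scores within each group
for name, g in df.groupby("group"):
    print(name, stats.ttest_rel(g["pre_total"], g["post_total"]))

# Chi-square and Fisher's exact tests for a 2x2 categorical comparison (e.g., sex by group)
table = pd.crosstab(df["group"], df["sex"])
print(stats.chi2_contingency(table))
print(stats.fisher_exact(table.to_numpy()))
```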

Ethics approval and consent to participate

The present study was approved by the Ethics Committee of AJUMS (IR.AJUMS.REC.1402.303) and was carried out in accordance with the provisions of the 2013 Declaration of Helsinki. The objectives, procedures, and conditions of the study were fully explained to the potential participants. Written informed consent was obtained from all participating students. Also, confidentiality of data and anonymity of students were guaranteed in the entire study process.

Results

The study enrolled a total of 48 participants, with a gender distribution of 16 males (33.3%) and 32 females (66.7%). Statistical analysis revealed no significant difference in gender composition between the intervention and control groups (P=0.540). Similarly, the distribution of marital status showed no statistically significant inter-group variation (P>0.999). In terms of age, the majority of participants (n=31, 64.6%) were 21 years old, while the remaining participants (n=17, 35.4%) were 22 years old. Age distribution across the intervention and control groups was homogeneous (P=0.131). Also, no significant difference was observed between the intervention (17.39±0.64) and control (17.55±0.58) groups in terms of grade point average (P=0.364) (Table 2).

Demographic information        Control        Intervention   Total          P
Grade point average            17.55±0.58     17.39±0.64     17.47±0.61     0.364
Sex: Male                      7 (14.6%)      9 (18.8%)      16 (33.3%)     0.540
Sex: Female                    17 (35.4%)     15 (31.3%)     32 (66.7%)
Marital status: Single         20 (41.7%)     21 (43.8%)     41 (85.4%)     >0.999
Marital status: Married        4 (8.3%)       3 (6.3%)       7 (14.6%)
Age (year): 21                 13 (27.1%)     18 (37.5%)     31 (64.6%)     0.131
Age (year): 22                 11 (22.9%)     6 (12.5%)      17 (35.4%)
Table 2. Frequency (percentage) of demographic information of participants in the control and intervention groups

Table 3 shows the unadjusted and adjusted means and standard deviations of the OSCE scores for the students' clinical competence levels, together with the results of the analysis of covariance (ANCOVA) comparing post-test scores between the two groups while adjusting for pre-test scores. The results show a significant difference in patient safety and anesthesia care scores after the intervention, with the control group scoring 10.24 points lower than the intervention group (P<0.001). Similarly, there was a significant between-group difference in critical thinking scores after the intervention, with the control group scoring significantly lower than the intervention group (P<0.001). The adjusted professional communication and collaboration scores in the control group were also significantly lower than those in the intervention group (P<0.001). Additionally, the professionalism scores in the control group were significantly lower (P<0.001), as were the overall questionnaire scores (P<0.001).
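Read in this way, each β reported in Table 3 is the adjusted between-group difference from an ANCOVA model of the following general form (a standard formulation, with the control group as the reference category):

post-test score = β0 + β1 x group + β2 x pre-test score + ε,

where group is 1 for the intervention group and 0 for the control group; β1 (for example, 10.24 for patient safety and anesthesia care) therefore estimates how much higher the intervention group's post-test score is after adjustment for the pre-test score.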

Variables                  Unadjusted (Mean±SD)   Adjusted (Mean±SE)   β        SE      P
Safety
  Control                  10.66±1.60             10.64±0.40           REF      REF     REF
  Intervention             20.87±1.87             20.89±0.37           10.24    0.53    <0.001
Critical thinking
  Control                  9.91±1.10              9.96±0.25            REF      REF     REF
  Intervention             19.25±1.26             19.21±0.25           9.24     0.36    <0.001
Professional communication
  Control                  7.25±1.35              7.20±0.30            REF      REF     REF
  Intervention             13.00±1.53             13.04±0.30           5.84     0.43    <0.001
Professionalism
  Control                  12.58±1.50             12.57±0.26           REF      REF     REF
  Intervention             21.17±1.00             21.17±0.26           8.60     0.37    <0.001
Total score
  Control                  40.42±2.84             40.25±0.66           REF      REF     REF
  Intervention             74.29±3.44             74.45±0.66           34.21    0.96    <0.001
Table 3. ANCOVA summary table examining the impact of the OSCE assessment on the clinical competence levels of students

Discussion

The aim of this study was to investigate the effect of formative OSCE on the level of clinical competence of nurse anesthesia students. The findings showed that formative OSCE accompanied by immediate feedback significantly improved the clinical competence of nurse anesthesia students in terms of room preparation, preoperative assessment of the patient, actions before induction of anesthesia, intra- and periprocedural actions, post-anesthesia actions, and the emergence process. Students who participated in the formative OSCE obtained higher scores for preparation and practical skills compared with before the intervention, whereas students who did not participate in the formative OSCE showed no significant change. This improved performance and competence indicate that the formative OSCE experience may exert a useful educational influence on nurse anesthesia students. Possible reasons for the improved post-test performance after the formative OSCE include increased motivation after obtaining a poor score on the formative OSCE, improved test-taking techniques due to practice opportunities, and development of clinical skills in the interval between formative OSCEs.

A recent study showed that a formative OSCE program consisting of four sessions administered once every two weeks and accompanied by immediate feedback had a positive effect on student learning (17). Likewise, the data of the present study showed that after three sessions of formative OSCE, the clinical competence of the intervention group was significantly better than that of the control group at all tested stations. Therefore, it can be argued that frequent repetition of OSCE tests, especially with immediate verbal feedback and subsequent visual feedback from the teacher, along with guidance during the OSCE provided by experienced clinical instructors, increases the students' level of clinical competence, which is in line with the results of the previous study. In our study, the evaluators asked more diverse questions with the aim of better measuring the students' knowledge. However, in order to provide effective feedback, evaluators and instructors must have a variety of frameworks in mind to help students gain a deeper understanding of the factors influencing their reasoning or behavior, reexamine their assumptions and values, and develop a wider range of possible responses and interventions (18).

Prior to the commencement of the intervention, briefing meetings were held to familiarize the participants with the objectives of the formative OSCE tests. Aligning the curriculum with learning objectives and educational activities helps expose students to core educational concepts. It has previously been shown that formative OSCE is more effective when students understand the end goal and view the outcome as a target; thus, students' prior familiarity appears to enhance and improve their performance (17). In this regard, after conducting each formative OSCE, the research team held one-on-one meetings with the instructors to examine the students' problems and behaviors during the exam. It is clear that holding briefing sessions on the educational goals and evaluation criteria is necessary in order to benefit from formative OSCE.

Another advantage of multiple formative OSCEs was that the students could practice their skills more confidently and strengthen their communication and interaction skills with standardized patients. The student feedback highlighted the perceived benefits of immediate feedback: participants reported that it facilitated the identification of performance gaps, enabling them to implement corrective strategies and enhance their communication and interactive skills, as evidenced by the study results. Previous studies have also shown that immediate feedback can increase students' competence (12).

In this study, it was observed that students attached significant importance to the formative OSCEs. In some cases, students welcomed and appreciated the feedback from the evaluators, and they also had a positive opinion about the feedback recorded individually by the faculty member. Choosing evaluators with clinical experience also appears to play an important role in providing quality feedback, because their experience makes it possible to better stimulate students' clinical reasoning.

There is conflicting evidence regarding the reproducibility of knowledge and critical thinking stations in OSCEs. Mavis et al. have suggested that critical thinking skills are more stable in written form than in an OSCE, whereas Rosebraugh et al. found high reproducibility for the knowledge and thinking components of their OSCE stations. The results of the present study showed that when issues related to problem solving and knowledge in performing functional skills were identified and given special attention, a significant improvement in performance and competence was achieved, as confirmed by the comparison of the pre-test and post-test scores. Furthermore, the study suggests that the enhanced communication skills fostered through formative OSCE interactions with standardized patients, coupled with the evaluators' emphasis and feedback, were likely translated into improved student-patient relationships in real-world clinical settings (19).

It seems that few students paid attention to the learning aspect of formative OSCE, which indicates that students probably had an insufficient understanding of this type of clinical assessment; this can be improved through better communication with students and additional training sessions. In developing the OSCE stations, maximum effort was made to make them resemble cases in the real clinical environment as closely as possible. This similarity allows the students to have a comparable experience before they encounter real situations and helps them learn with a broader view and greater depth. This study showed that holding formative OSCE sessions is entirely feasible for universities and colleges. However, in order to guarantee implementation and transferability to other departments, a balance must be struck between the available resources, the professors and the time available to them, the number of students involved, and the number of OSCE sessions.

Previous research has shown that the reliability of the OSCE for assessing clinical competence approaches an acceptable standard only when the exam is at least 2 hours long or has at least 10 stations (20). Given the complexity of competence assessment, one way to improve our ability to truly assess participants, instead of directly measuring clinical competence with tests, would be to focus on more functional skills, such as preparing patients for specialized surgeries, working with specialized tools, or performing specialized skills related to gynecology or pediatrics. Future studies may also use and combine more complex types of feedback, such as online video feedback or simultaneous implementation across multiple universities online, which may lead to more meaningful results.

In short, formative OSCE is considered a positive and useful activity allowing the students to use their learning experiences in real clinical environments. This study showed that the use of formative OSCE with immediate verbal and visual feedback can significantly increase the clinical competence of final year students. Based on the positive outcomes observed in this study, the authors recommend integrating formative OSCEs into the curriculum for final-year nurse anesthesia students. This implementation would ideally occur prior to their transition into real-world clinical practice. The potential benefits include enhanced development across cognitive, emotional, and psychomotor domains, ultimately preparing students for success in the clinical environment.

Limitations

A key limitation of this study pertains to the sample size. The research exclusively targeted final-year nurse anesthesia students, which restricts the generalizability of the findings to other populations. Future research would benefit from incorporating a larger and more diverse group of participants. Although the small sample size was the main limitation of this study, it should be noted that it is easier to control and manage formative OSCE training with a sample of this size.

Conclusion

Formative OSCEs have a positive role in improving the performance and clinical competence of nurse anesthesia students in the real clinical environment. Therefore, it is very beneficial to conduct several formative OSCEs for nurse anesthesia students and enable them to practice specialized skills. It was shown that formative OSCEs accompanied by immediate feedback play a very important role not only in increasing reasoning and critical thinking but also in enhancing personal relationships and improving students' specialized skills. The visual feedback provided by the professors was also received very positively by the students. Formative OSCE along with feedback helped students gain a better understanding of the realities and behaviors of the clinical environment and also helped their professors identify students who needed further educational intervention. Finally, this study showed that formative evaluations serve as an opportunity for learning, have a positive effect on improving students' educational behaviors, and help students learn more efficiently. However, it is recommended that more studies with larger numbers of participants be conducted to confirm this conclusion.

Acknowledgements

The researchers hereby appreciate the financial support of Ahvaz Jundishapur University of Medical Sciences, as well as the sincere cooperation of the students participating in this study.

Authors' contributions

AKh: Conceptualization, Methodology, Formal analysis, Writing - Review & Editing, Project administration; AA: Conceptualization, Methodology, Data curation, Formal analysis, Software, Investigation, Writing-Original Draft; NSS: Conceptualization, Methodology, Formal analysis, Writing - Review & Editing, Supervision; MA: Conceptualization, Methodology, Data curation, Formal analysis, Software, Investigation, Writing-Original Draft.

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Hui T, Zakeri MA, Soltanmoradi Y, Rahimi N, Hossini Rafsanjanipoor SM, Nouroozi M, et al. Nurses’ clinical competency and its correlates: before and during the COVID-19 outbreak. BMC Nursing. 2023; 22(1):156.
  2. Almarwani AM, Alzahrani NS. Factors affecting the development of clinical nurses’ competency: A systematic review. Nurse Education in Practice. 2023; 73:103826.
  3. Ghanbari A, Hasandoost F, Lyili EK, Khomeiran RT, Momeni M. Assessing Emergency Nurses' Clinical Competency: An Exploratory Factor Analysis Study. Iran J Nurs Midwifery Res. 2017; 22(4):280-6.
  4. Lee KC, Ho CH, Yu CC, Chao YF. The development of a six-station OSCE for evaluating the clinical competency of the student nurses before graduation: A validity and reliability analysis. Nurse Education Today. 2020; 84:104247.
  5. Rowland K, Edberg D, Anderson L, Wright K. Features of Effective Clinical Competency Committees. J Grad Med Educ. 2023; 15(4):463-8.
  6. Tridane M, Belaaouad S, Benmokhtar S, Gourja B, Radid M. The Impact of Formative Assessment on the Learning Process and the Unreliability of the Mark for the Summative Evaluation. Procedia - Social and Behavioral Sciences. 2015; 197:680-5.
  7. Lee GB, Chiu AM. Assessment and feedback methods in competency-based medical education. Annals of Allergy, Asthma & Immunology. 2022; 128(3):256-62.
  8. Atkinson A, Watling CJ, Brand PLP. Feedback and coaching. Eur J Pediatr. 2022; 181(2):441-6.
  9. Stenberg M, Mangrio E, Bengtsson M, Carlson E. Formative peer assessment in higher healthcare education programmes: a scoping review. BMJ Open. 2021; 11(2):e045345.
  10. Plöger R, Abramian A, Egger EK, Mustea A, Sänger N, Plöger H, et al. Evaluation of an OSCE's implementation and a two-step approach for a theoretical and practical training program in Obstetrics and Gynecology. Front Med (Lausanne). 2023; 10:1263862.
  11. Wu JW, Cheng HM, Huang SS, Liang JF, Huang CC, Yang LY, et al. Comparison of OSCE performance between 6- and 7-year medical school curricula in Taiwan. BMC Medical Education. 2022; 22(1):15.
  12. Lee MHM, Phua DH, Heng KWJ. The use of a formative OSCE to prepare emergency medicine residents for summative OSCEs: a mixed-methods cohort study. Int J Emerg Med. 2021; 14(1):62.
  13. Luo P, Shen J, Yu T, Zhang X, Zheng B, Yang J. Formative objective structured clinical examination with immediate feedback improves surgical clerks’ self-confidence and clinical competence. Med Teach. 2023; 45(2):212-8.
  14. Riaz M, Hussain F, Khan M. The role of formative objective structured clinical examinations on students’ performance in clinical years. International Journal of Research in Medical Sciences. 2021; 9(8):2231-4.
  15. Alkhateeb NE, Al-Dabbagh A, Ibrahim M, Al-Tawil NG. Effect of a Formative Objective Structured Clinical Examination on the Clinical Performance of Undergraduate Medical Students in a Summative Examination: A Randomized Controlled Trial. Indian Pediatr. 2019; 56(9):745-8.
  16. Chen SH, Chen SC, Lai YP, Chen PH, Yeh KY. The objective structured clinical examination as an assessment strategy for clinical competence in novice nursing practitioners in Taiwan. BMC Nurs. 2021; 20(1):91.
  17. Ouldali N, Le Roux E, Faye A, Leblanc C, Angoulvant F, Korb D, et al. Early formative objective structured clinical examinations for students in the pre-clinical years of medical education: A non-randomized controlled prospective pilot study. PLoS One. 2023; 18(12):e0294022.
  18. Junod Perron N, Louis-Simonet M, Cerutti B, Pfarrwaller E, Sommer J, Nendaz M. The quality of feedback during formative OSCEs depends on the tutors’ profile. BMC Medical Education. 2016; 16(1):293.
  19. Rosebraugh CJ, Speer AJ, Solomon DJ, Szauter KE, Ainsworth MA, Holden MD, et al. Setting standards and defining quality of performance in the validation of a standardized-patient examination format. Acad Med. 1997; 72(11):1012-4.
  20. Blamoun J, Hakemi A, Armstead T. A Guide for Medical Students and Residents Preparing for Formative, Summative, and Virtual Objective Structured Clinical Examination (OSCE): Twenty Tips and Pointers. Advances in medical education and practice. 2021; 12:973-8.