Evaluation of ophthalmology residents' self-assessments and peer assessments in simulated surgery.
Objective: To evaluate the accuracy of ophthalmology residents' self-assessment and peer assessment of surgical skills in a simulation setting.
Design: Simulation laboratory assessment. Participants: Ophthalmology residents who were novices at cataract surgery.
Methods: A structured assessment tool for simulated cataract surgery, modified from the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric: Phacoemulsification, was developed using a conventional Delphi method. Residents completed 10 independent simulated surgeries that were video-recorded. Two experts graded the videos using the assessment tool. Participants performed self-assessment of their own 10 videos and peer assessment of 10 of their peers' videos.
Results: Nine cataract surgery experts provided feedback and modifications for the assessment tool. Agreement in the first round of the Delphi process ranged from 55.56% to 100%; second-round agreement was 80% or greater for all items. The final assessment tool comprised (i) 4 procedural items scored from 0 (not performed) to 7 (competent) and (ii) a global rating scale (GRS) requiring yes/no answers to 4 performance-related questions. Eight residents participated in the study. Inter-rater reliability between the two experts was excellent (intraclass correlation coefficient [ICC] = 0.844, 0.875, 0.809, 0.844), and reliability between expert and peer scores was fair to excellent (ICC = 0.702, 0.831, 0.521, 0.423), but expert and self-assessment scores showed systematic disagreement (ICC = -0.428, -0.038) or poor reliability (ICC = 0.298, 0.362). Agreement on the GRS questions was poor (κ < 0.40) for all but 2 comparisons.
Conclusions: In the simulation setting, experts were able to reliably assess trainees' performance using the assessment tool. Participants assessed their own skills inconsistently; however, they were adequate at assessing their peers' overall performance.
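
The abstract does not specify the software used for the reliability analysis. The following is a minimal sketch of how the reported statistics (ICC for the procedural item scores, Cohen's κ for the yes/no GRS answers) could be computed in Python; the pingouin and scikit-learn libraries, the column names, and the example ratings are all hypothetical illustrations, not the authors' data or code.

```python
# Sketch only: hypothetical long-format ratings with assumed columns
# 'video', 'rater', and 'score'; not the study's actual analysis.
import pandas as pd
import pingouin as pg                        # intraclass_corr()
from sklearn.metrics import cohen_kappa_score

# Hypothetical procedural item scores (0-7 scale), two raters per video.
ratings = pd.DataFrame({
    "video": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["expert", "peer"] * 4,
    "score": [5, 5, 6, 7, 4, 4, 7, 6],
})

# Inter-rater reliability across videos (intraclass correlation coefficient).
icc = pg.intraclass_corr(data=ratings, targets="video",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])

# Cohen's kappa for the yes/no GRS questions (hypothetical answers per video).
expert_grs = [1, 0, 1, 1]
self_grs = [1, 1, 0, 1]
print("kappa:", cohen_kappa_score(expert_grs, self_grs))
```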