Early adopter advice series on Competence by Design
Dr. Paolo Campisi, professor and vice-chair of education, director of postgraduate education, University of Toronto
What was the goal of the resident-initiated Leaderboard project?
The goal of the Leaderboard project was to increase the rate of EPA completion, measured on a weekly basis and using a quality improvement methodology. It was simply a ‘scoreboard’ that kept a tally of the number of EPAs completed by the residents in the same cohort. It was not anonymous, and the residents were OK with that. The Leaderboard relied on the innate competitiveness of surgical residents to outperform their co-residents.
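The tally described above amounts to a simple weekly count per resident. A minimal sketch of that logic is below; the resident names and data shape are illustrative assumptions, since the actual Leaderboard was maintained by hand.

```python
# Sketch of a weekly EPA tally, as described for the Leaderboard.
# Input format is an assumption: one resident name per completed EPA.
from collections import Counter

def weekly_leaderboard(completed_epas):
    """Return residents ranked by number of EPAs completed this week.

    completed_epas: list of resident names, one entry per completed EPA.
    """
    tally = Counter(completed_epas)
    return tally.most_common()  # sorted by count, highest first
```

For example, `weekly_leaderboard(["Ana", "Ben", "Ana"])` returns `[("Ana", 2), ("Ben", 1)]`.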
What were the project findings?
The Leaderboard was effective in improving the overall EPA completion rate: over 10 weeks from baseline, the weekly completion rate rose from 0.22 to 2.87 assessments per resident. One unintended positive outcome was recognizing the value of regular feedback to residents, which is now reported anonymously. We also found we could apply the same strategy to faculty to inform them of their participation in the CBD assessments. Once the appropriate software is developed, we will initiate regular reports for faculty at each teaching site. We had no negative findings, other than recognizing that an anonymous reporting approach was more favourable.
What challenges did you encounter?
At the time, we did not have software that could easily tabulate the results. We relied on our postgraduate education coordinator, Andrea Donovan, to manually calculate the numbers. Fortunately, the project involved only five residents, so it was manageable.
Would you recommend a Leaderboard in other programs?
Yes, I would recommend this strategy to other programs. I would also recommend finding ways to leverage data management systems to generate reports with more detailed metrics for learners and faculty. In addition to the rate of EPA completion, the software can measure the number of EPA requests initiated, the proportion completed, and the proportion of completed assessments that met expectations and demonstrated competency. These metrics can be broken down by both faculty member and teaching site. For larger programs, data management systems that can generate such metrics automatically would be essential.
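The metrics listed above could be automated along the following lines. This is a hypothetical sketch, not the program's actual software; the record fields (`site`, `completed`, `met_expectations`) are assumptions about how EPA requests might be stored.

```python
# Hypothetical per-site summary of the EPA metrics described above:
# requests initiated, proportion completed, and proportion of completed
# assessments that met expectations.
from collections import defaultdict

def epa_metrics(requests):
    """Summarize EPA requests per teaching site.

    Each request is a dict such as:
      {"site": "Site A", "completed": True, "met_expectations": True}
    """
    by_site = defaultdict(lambda: {"initiated": 0, "completed": 0, "met": 0})
    for r in requests:
        stats = by_site[r["site"]]
        stats["initiated"] += 1
        if r["completed"]:
            stats["completed"] += 1
            if r["met_expectations"]:
                stats["met"] += 1
    report = {}
    for site, s in by_site.items():
        report[site] = {
            "initiated": s["initiated"],
            "proportion_completed": s["completed"] / s["initiated"],
            "proportion_met_expectations": (
                s["met"] / s["completed"] if s["completed"] else 0.0
            ),
        }
    return report
```

The same grouping could equally be keyed by faculty member rather than site, which is why a general data management system is preferable to manual tabulation in larger programs.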