Daniel Kendrick, MD, MAEd, Research Fellow, Center for Surgical Training and Research, Department of Surgery, Michigan Medicine
With growing concern regarding the readiness of general surgery graduates to enter independent practice, it is imperative that we accurately assess the progression of resident competence throughout training. At the same time, we must do so in a way that minimizes the measurement burden on both faculty and trainees.
Surprisingly, it is currently unknown how surgical training programs determine competence and how much individual programs have in common. Some measures of resident performance are consistent across all programs, including the ABSITE, ACGME case logging, and Milestone ratings, but programs also use many other assessments that have been developed locally. This presents a problem both in understanding the results of these tools and in validating and improving them.
Ideally, all training programs would have a standardized set of assessment tools that, together, accurately predict the eventual clinical performance of a trainee and allow for targeted intervention prior to graduation. To realize this goal, we must do several things. First, we must define which performance domains are important to assess during general surgery residency (i.e., what defines the performance of a competent practicing general surgeon). Next, we must understand what different training programs are doing in order to design a toolkit suited to their diverse needs and resources. Last, we must validate these tools' ability to predict the early-career clinical performance of graduating residents.
Much effort has gone into determining important performance domains in general surgery through the development of the ACGME General Surgery Milestones. Through their application, the Milestones guide training programs to develop methods to measure and report trainee competence in each of sixteen important areas of performance. It follows that many programs, working in parallel, have arrived at distinct local evaluation processes, and the next step is to understand what these different methods are. Before an effective standardized toolkit can be built, we must assess which tools and strategies training programs currently use to measure resident performance and how they fit within each of these domains.
To address this, we plan to conduct a survey-based assessment inventory of the evaluation process at all sites participating in the Variability In Trainee Autonomy and Learning in Surgery (VITALS) trial. We will synthesize the responses into a comprehensive picture of how general surgery training programs currently measure trainee performance. This will identify gaps in the current assessment process, with the eventual goal of building and validating a standardized assessment toolkit to be shared across all training programs.
Please email Dr. Kendrick for more information.