The SIMPL team has been working hard on a fully updated SIMPL website. Check back on www.simpl.org soon for the new and improved site!
Check out all that we have accomplished together this year!
Don't hesitate to reach out to learn more!
We are a quality improvement collaborative. Much of our collective work has focused on implementing SIMPL. SIMPL was the core of our initial work because “one cannot improve at scale what one cannot measure”1, and if you want to improve surgery, you need an operative performance assessment. Now that SIMPL has been widely implemented, we are ready to use SIMPL data as an outcome measure for other, larger improvement efforts. In other words, we are now ready to more fully become the quality improvement collaborative we envisioned at the very beginning.
As we embark on that shared work it has become clear that our current name—the Procedural Learning and Safety Collaborative—needs to be updated. We must have a name that better communicates our values and our vision. To that end, we have opened a “Rename PLSC Competition”. We provide here the rules and other information to help you develop a winning name.
Competition Requirements
The winning name must:
Other criteria that are desirable, but not required, are that the name:
Submissions
Please email your ideas to us before midnight on June 7, 2020. The Procedural Learning and Safety Collaborative Steering Committee will judge the submissions and select a winner, to be announced on our website and via email. The winner will receive a small token of our appreciation (PLSC swag and a $100 Amazon gift certificate).
Additional Information
Our Vision
Every physician who cares for a patient is competent.
Tag line
“Improving the quality of medical care by improving the quality of physician training”
Elevator pitch
The Procedural Learning and Safety Collaborative (PLSC) is a non-profit educational quality improvement consortium focused on investigating and developing tools, curricula, and policy to improve the training of physicians.
Principles
In our work, we are guided by the following principles:
Daniel Kendrick, MD, MAEd, Research Fellow, Center for Surgical Training and Research, Michigan Medicine Department of Surgery
With growing concern about the readiness of general surgery graduates to enter independent practice, it is imperative that we accurately assess the progression of resident competence throughout training. At the same time, we must do so in a way that minimizes the measurement burden on both faculty and trainees.
Surprisingly, it is currently unknown how surgical training programs determine competence and how much individual programs have in common. Some measures of resident performance are consistent across all programs, including the ABSITE exam, ACGME case logging, and Milestone ratings, but programs also use many other assessments that have been developed locally. This presents a problem both in interpreting the results of these tools and in validating and improving them.
Ideally, all training programs would share a standardized set of assessment tools that, together, accurately predict the eventual clinical performance of a trainee and allow for targeted intervention prior to graduation. To realize this goal, we must do several things. First, we must define which performance domains are important to assess during general surgery residency (i.e., what defines the performance of a competent practicing general surgeon). Next, we must understand what different training programs are doing so that we can design a toolkit that fits their diverse needs and resources. Finally, we must validate these tools’ ability to predict the early-career clinical performance of graduating residents.
Much effort has already gone into defining important performance domains in general surgery through the development of the ACGME General Surgery Milestones. The Milestones guide training programs to develop methods to measure and report trainee competence in each of sixteen important areas of performance. It follows that many programs, working in parallel, have arrived at distinct local evaluation processes; the next step is to understand what those methods are. Before an effective standardized toolkit can be built, we must assess which tools and strategies training programs currently use to measure resident performance and how they fit within each of these domains.
To address this, we plan to conduct a survey-based inventory of the evaluation process at all sites participating in the Variability In Trainee Autonomy and Learning in Surgery (VITALS) trial. We will synthesize the responses into a comprehensive picture of how general surgery training programs currently measure trainee performance. This will identify gaps in the current assessment process, with the eventual goal of building and validating a standardized assessment toolkit to be shared across all training programs.
Please email Dr. Kendrick for more information.