In Situ Simulation – Part 2: ED in situ simulation for QI at Kelowna General Hospital

This 2-part series was written by Jared Baylis, JoAnne Slinn, and Kevin Clark.

Jared Baylis (@baylis_jared) is a PGY-4 and chief resident at the Interior Site of UBC’s Emergency Medicine residency program (@KelownaEM). He has an interest in simulation, medical education, and administration/leadership and is currently a simulation fellow through the Centre of Excellence for Simulation Education and Innovation in Vancouver, BC, and an MMEd student through Dundee University.

JoAnne Slinn is a Registered Nurse with a background in emergency nursing and the simulation nurse educator at the Pritchard Simulation Centre in Kelowna. She recently completed her Master of Nursing and has CNA certification in emergency nursing.

Kevin Clark (@KClarkEM) is the Site/Program Director for the UBC Emergency Medicine program in Kelowna. He completed a master’s degree in education with a focus on simulation back in the day when high fidelity simulation was new and sim fellowships weren’t yet a thing.

Welcome back to part 2 of our series on in situ simulation for quality improvement! Check out last month’s post for a deeper dive into the literature behind this concept. In this post, we will outline the vision, structure, participants, results, and lessons learned in the implementation of our ED in situ simulation program at Kelowna General Hospital (KGH).

The Vision

KGH is a tertiary care community hospital serving the interior region of British Columbia. Our emergency department (ED) sees more than 85,000 patient visits per year. In 2013, we became a University of British Columbia distributed site for training Royal College emergency medicine residents. With this program came a responsibility to increase academic activities in the department, both for education and for team building and quality improvement (QI). Our aims with the program were to:

  1. Improve interprofessional collaboration.
  2. Improve resuscitation team communication.
  3. Develop resident resuscitation leadership skills.
  4. Educate emergency department professionals on medical expertise related to resuscitation.
  5. Identify and select two quality improvement action items that arise within each resuscitation scenario.
  6. Assess and respond to each QI action item in the interest of better patient care.
  7. Educate participants and other department staff about each QI action item in an effort to change processes and behaviours.

From a departmental QI standpoint, we applied the “SMART” framework: specific, measurable, attainable, realistic, and time-based.¹ Our goal, as stated above, was to select two QI action items that came up during the debrief following our simulation. Our nurse educator group follows up on each of these items and reports back to the local ED network, pharmacy, or the ED manager, depending on which is most appropriate for the particular QI issue. This ensures our model remains sustainable over time. Follow-up emails are sent out to “close the loop” with attendees and department staff after each session. Learnings from the simulations are also presented to the local ED network to share with smaller sites that do not have simulation opportunities.

The Structure

Each session includes one to two scenarios in which a “patient” with a critical illness is resuscitated by the team. Both adult and pediatric cases have been run using high fidelity simulators and a variety of resuscitation topics. The cases are run over a 90-minute period once per month, immediately prior to our departmental meeting, which encourages attendance and participation. The timing also coincides with our residency program’s academic day, further increasing attendance and participation. The resuscitation/trauma room in the KGH ED is used for these sessions. The program has been well received and was highlighted on the local Global News channel as a public display of our QI initiative.


ED in situ simulation at KGH

The session begins with a pre-brief covering introductions, general objectives, confidentiality, the fidelity contract, and an outline for the session, followed by an orientation to the simulator, monitors, and equipment in the room. The scenario then begins with a pre-hospital notification and bedside handover by paramedics, and proceeds through emergency department care, ending with a decision on disposition. The scenario is run in real time to maximize realism in terms of the time it takes to draw up and administer medications. It is followed by a debriefing session, led by a staff physician experienced in simulation debriefing, that draws feedback from all team members as well as observers. CanMEDS themes such as communication, collaboration, leadership, and medical expertise are all discussed.²

Participants and Recruitment

Participants include emergency physicians, residents, nurses, respiratory therapists, pharmacists, paramedics, security, and students from the aforementioned groups. Participants are recruited with an email announcing the session one week prior, sign-up lists on the educators’ door, and posters placed in the ambulance bay and paramedic stations. Cases are determined by the EM Residency Director in conjunction with the Simulation Fellow, ED Nurse Educators, and the Simulation Nurse Educator. The cases are distributed to the discipline leads 2-7 days prior to the session in order to prepare students and newer professionals who may be joining.

Our Results

There were a total of 65 participants when the program began in 2015, with an average of 16 participants per session. This grew to 130 total participants and an average of 19 per session in 2016, and increased further in 2017 to 213 total participants with a session average of 24, giving 408 participants since program inception. The distribution of participant disciplines over the duration of the program is shown below:

Graph 1: ED In Situ Participant Data 2015 – 2017

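As a quick sanity check on the figures above, the yearly totals do sum to the stated 408. A minimal sketch (the per-year session counts are not reported and are inferred here, hypothetically, from each year's total divided by its average):

```python
# Sanity check of the participation figures reported above.
# Yearly totals and per-session averages come from the text;
# the per-year session counts are inferred (hypothetical), not reported.
yearly_totals = {2015: 65, 2016: 130, 2017: 213}
avg_per_session = {2015: 16, 2016: 19, 2017: 24}

grand_total = sum(yearly_totals.values())
print(grand_total)  # 408, matching the stated total since inception

for year, total in yearly_totals.items():
    # Approximate number of sessions run that year
    print(year, round(total / avg_per_session[year]))
```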

Feedback has been informal but overwhelmingly positive. The ED nurse educators have found in situ simulation to be one of the most valuable educational experiences for the department and have advocated for the sessions to count as paid education time for the nurses, which has increased buy-in and participation. Paramedics have commented that it is time well spent and that they appreciate seeing what happens to the patient after they hand over care. They also remarked that this type of training will go a long way towards better inter-agency cooperation and understanding.

A variety of QI initiatives have been brought forward from these sessions, including better use of existing protocols, identification of poorly placed or expired equipment, and development of better teamwork processes, similar to what was described in our literature review. One specific example was developing a simulation case around our pediatric diabetic ketoacidosis protocol (then still in draft form), running the case using the protocol, and then providing feedback on revisions, including clarity on initial fluid replacement orders, additions to the initial blood work orders, and improvements to insulin pump delivery. Further QI initiatives that have resulted from this project are summarized below.

Table 2: QI action items and their resulting actions

CATEGORY — ISSUE → RESULTING ACTION

Team/Communication

1. Delay in calling for help → Reinforce calling for help early
2. Team members not speaking up when a change in the patient’s condition is noticed → Foster an environment that encourages input from all team members
3. Medication order confusion between physician, pharmacy, and nursing → Reinforce the importance of closed-loop communication with medication orders
4. Not all team members hearing the paramedic report → Reinforce a single paramedic report during which everyone stops and listens (unless actively involved in CPR)

Equipment/Resources

1. Difficulty looking up medication information in the resus bay → Installed an additional computer in the resus bay for better access to information
2. Unsafe needles for use with agitated patients in the resus bay → Made auto-retractable needles available in the resus bay
3. Expired Blakemore tubes in the ED → Ordered new Blakemore tubes
4. Uncertainty about the PPE needed during a potential carfentanil exposure case → Communicated the provincial Medical Health Officer’s carfentanil PPE recommendations to staff

Knowledge/Task

1. Lack of knowledge of the local use of the DOAC antidote → Reviewed the indications/contraindications, ordering information, and administration of Praxbind (idarucizumab)
2. Uncertainty about the local process for initiating ECMO in the ED → Reviewed team placement, patient transfer, and initiation of ECMO lines in the ED
3. Conflict over when to intubate a hemodynamically unstable patient → Reinforced resuscitation prior to intubation

PPE – Personal Protective Equipment

DOAC – Direct Oral Anticoagulant

ECMO – Extracorporeal Membrane Oxygenation

Successes, Lessons Learned, and Suggestions

In this article, we set out to describe our experience with regard to ED based in situ simulation as well as to outline the evidence for in situ simulation as a QI tool (part 1). We hope that this serves as encouragement to those of you who are thinking of getting such a program started at your institution. In reflecting on our process, we would offer these suggestions and lessons learned:

  1. Engage a team. It takes a team that is committed to the process to get this off the ground. Take the evidence to your team, gain support, and then begin your program.
  2. Start out with your goals/aims/objectives in mind so that you know what it is you’re trying to accomplish.
  3. “Buy in” is key. Try to structure your program so that it is convenient and so that attendance and participation are encouraged. For us this meant holding our in situ simulation on academic days for the residency program and immediately preceding our departmental meeting.
  4. Celebrate your successes with everyone involved to build a culture that values in situ simulation and quality improvement.
  5. Bring team members on board who are trained and experienced in simulation, as debriefing a multidisciplinary simulation can introduce specific challenges. This is beyond the scope of this article, but there are many good resources on debriefing, including the PEARLS framework.³

We’ll close with the 10 tips that Spurr et al. described in their excellent article on how to get started with in situ simulation in an ED or critical care setting.⁴

  1. Think about your location and equipment    
  2. Engage departmental leaders to support simulation
  3. Agree on your learning objectives for participants and the department
  4. Be a multiprofessional simulation program
  5. Strive for realism
  6. Start simple, then get complex
  7. Ensure everyone knows the rules and feels safe
  8. Link what you find in simulation to your clinical governance system
  9. The debrief is important: be careful, skillful, and safe
  10. Be mindful of real patient safety and perception

We would love to hear from you. If you have any questions or comments please feel free to comment on this post or to reach us by twitter (@baylis_jared, @KClarkEM, @KelownaEM).

References:

  1. Haughey D. Smart Goals [Internet]. [cited Dec 2017]. https://www.projectsmart.co.uk/smart-goals.php
  2. CanMEDS: Better standards, better physicians, better care [Internet]. [cited Dec 2017]. http://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e
  3. Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simulation in Healthcare. 2015 Apr 1;10(2):106-15.
  4. Spurr J, Gatward J, Joshi N, Carley SD. Top 10 (+ 1) tips to get started with in situ simulation in emergency and critical care departments. Emerg Med J. 2016 Jul 1;33(7):514-6.

Electrical Storm

This case is written by Dr. Peter Dieckmann and Dr. Marcus Rall of the TuPASS Centre for Safety and Patient Simulation in Germany.

Why it Matters

Electrical storm is a rare complication of cardiac arrest. When it is present, the typical therapies for aborting VF are not sufficient. This case reviews the tailored management of this situation.

Clinical Vignette

“Arrest arriving in 1 minute. Doctor to resuscitation room STAT.”

Paramedic report: “This is a 55-year-old male we picked up at an office tower down the street. Apparently he was complaining of feeling unwell all morning and then collapsed at lunch. A colleague started CPR and we were called. The AED delivered 3 shocks. His colleagues say he’s healthy and they’re unsure about meds or allergies. His boss called his wife and she’s on her way.” CPR is ongoing.

Case Summary

A 55-year-old male is brought to the emergency department with absent vital signs. He collapsed at his office after complaining of feeling unwell. CPR was started by a colleague and continued by EMS. He received 3 shocks from an AED. His downtime is approximately 10 minutes. The team is expected to perform routine ACLS care. When the patient remains in VF despite ACLS management, the team will need to consider specific therapies, such as IV beta-blockade or dual sequential shock, in order to abort the electrical storm.

Download the case here: Electrical Storm

Cardiac U/S for the case found here:

(Ultrasound image courtesy of McMaster PoCUS Subspecialty Training Program)

ECG for the case found here:

(ECG source: https://lifeinthefastlane.com/ecg-library/anterior-stemi/)

CXR for the case found here:

Normal Post-Intubation CXR

(CXR source: https://emcow.files.wordpress.com/2012/11/normal-intubation2.jpg)

In Situ Simulation – Part 1: Quality Improvement Through Simulation

This 2-part series was written by Jared Baylis, JoAnne Slinn, and Kevin Clark. Part 1 is a review of the literature around in situ simulation for quality improvement, and part 2 will detail the emergency department in situ simulation program at Kelowna General Hospital, including successes, lessons learned, and suggestions for those of you considering starting an in situ simulation program in your centre.

Jared Baylis (@baylis_jared) is a PGY-4 and the chief resident at the Interior Site of UBC’s Emergency Medicine residency program (@KelownaEM). He has an interest in simulation, medical education, and administration/leadership and is currently a simulation fellow through the Centre of Excellence for Simulation Education and Innovation in Vancouver, BC, and an MMEd student through Dundee University.

JoAnne Slinn is a Registered Nurse with a background in emergency nursing and the simulation nurse educator at the Pritchard Simulation Centre in Kelowna. She recently completed her Master of Nursing and has CNA certification in emergency nursing.

Kevin Clark (@KClarkEM) is the Site/Program Director for the UBC Emergency Medicine program in Kelowna. He completed a master’s degree in education with a focus on simulation back in the day when high fidelity simulation was new and sim fellowships weren’t yet a thing.  

In situ simulation is a team-based training technique conducted in actual patient care areas, using equipment and supplies from that area, with people from the care team. (1,2) An increasing number of studies published since 2011, the majority since 2015, have investigated the benefits of in situ simulation as a quality improvement (QI) modality. (1-20) These studies offer a fascinating glimpse into the potential of in situ simulation. Here is a quote from Spurr et al. that eloquently describes its potential benefits: (19)

“In situ training takes simulation into the workplace. It allows teams to test their effectiveness in a controlled manner, to train for rare events and to interrogate departmental and hospital processes in real time and in real locations. It may also allow teams to uncover latent safety threats in their work environment.”

In this article, we will review recent literature surrounding in situ simulation as a QI tool as a preface to part 2 (next month) where we will describe our process of starting and maintaining an emergency department (ED) based in situ simulation program.

How can in-situ simulation be used for QI?

In the healthcare setting, QI is typically seen as systematic actions that result in measurable positive effects on health care services and/or patient outcomes. (21) There are several ways that in situ simulation can lead to improvement, all of which fall under the umbrella of QI. Previous studies have identified these as improvements in individual provider and/or team performance, identification of latent safety threats (more on this later), and improvement of systems. (11) We will go through several specific examples from the literature, found by performing a librarian-assisted search with the terms “in-situ”, “simulation” OR “simulation based education”, “emergency medicine”, and “quality improvement”. The search yielded 39 records, of which 19 were excluded for lack of relevance, leaving 20 records for review. The main themes of quality improvement using in situ simulation are described below.

Crisis Resource Management

Simply put, crisis resource management (CRM) speaks to the non-technical skills needed for excellent teamwork. (22) These, according to Carne et al., include knowing your environment, anticipating, sharing, and reviewing the plan, ensuring leadership and role clarity, communicating effectively, calling for help early, allocating attention wisely, and distributing the workload. (22)

Wheeler et al. ran standardized simulation scenarios twice per month on their inpatient hospital units. (1) The units were involved on a rotating basis which provided each unit with at least two in situ simulations per year. They noted 134 safety threats and knowledge gaps over the course of the 21-month study. These led to modification of systems but also provided a means to reinforce the use of assertive statements, role clarity, frequent updates regarding the plan, development of a shared mental model, and overcoming of authority gradients between team members.

Miller et al. had a similar CRM idea in mind with their observational study looking at actual trauma team activation during four different phases. (9) Phase one was pre-intervention, phase two was during a didactic-only intervention, phase three was during an in situ simulation intervention, and phase four was a post-intervention phase. They noted that the mean and median Clinical Teamwork Scale ratings for trauma team activations were highest during the in situ phase. Interestingly though, the scores returned to pre-intervention levels during the post-intervention phase implying that any sustained improvement in teamwork (CRM) is contingent on ongoing regular departmental in situ simulation.

Several other studies had a CRM focus in their research involving in situ simulation and all of them either demonstrated improvement in CRM capabilities or identified CRM issues that could be acted on later as a result of in situ simulation. (10-11, 13-14)

Rare Procedures

The most recent example of using in situ simulation for rare procedure assessment comes from a 2017 publication by Petrosoniak et al. (20) In this study, 20 emergency medicine residents were pretested for baseline proficiency at cricothyroidotomy. They were then exposed to a two-part curriculum involving a didactic session followed by a task trainer session, and afterwards tested with an unannounced in situ simulation involving cricothyroidotomy while on shift in the emergency department. The mean performance time for cricothyroidotomy decreased by 59 seconds (p < 0.0001) after the two-part curriculum, and global rating scale scores improved significantly as well. This suggests that in situ simulation can be an effective way of assessing proficiency with rare procedures in the emergency department.


Task trainers such as this chest tube mannequin can be used to teach a procedure before assessing proficiency using in situ simulation

Latent Safety Threats

Latent safety threats can be thought of as “accidents waiting to happen”. (1) There is mounting evidence that multidisciplinary in situ simulation can identify latent safety threats and even reduce patient safety events. Patterson et al. found that after introducing standardized multidisciplinary in situ simulation to their large pediatric emergency department, patient safety events fell from an average of 2-3 per year to a run of more than one thousand days without a single event. (8) The same author group noted that their in situ simulation program detected an average of 1 latent safety threat per 1.2 in situ simulations, each consisting of a 10-minute scenario followed by a 10-minute debrief. (10) These latent safety threats were a mix of equipment failures and knowledge gaps regarding roles.

Petrosoniak et al. noted that, with rare procedures, it is not adequate to just teach an individual how to perform the procedure. (11) One must rather run the scenario in an in situ simulation setting to identify potential latent safety threats as well as other systems and teamwork related issues. (11)

An interesting point of view was highlighted by Zimmermann et al., who raised the idea that demonstrated improvements in patient safety through in situ simulation can be used to justify the existence of an in situ program from an administrative standpoint. (14)

Overall, in situ simulation is better at detecting latent safety threats than traditional lab based simulation and it can improve patient safety without exposing patients to harm and with increased realism over lab based simulation. (16-18)

Systems Issues (e.g. equipment, stocking, labelling)

Systems issues overlap considerably with latent safety threats, teamwork, and CRM; however, one notable study is worth reviewing here. Moreira et al. conducted a prospective, block-randomized, crossover study in a simulated pediatric arrest scenario comparing prefilled, colour-coded (by Broselow category) medication syringes with conventional drug administration. (5) They demonstrated compelling results: time to drug administration was reduced from 47 seconds in the control group to 19 seconds in the colour-coded group. Notably, there were 20 critical dosing errors in the control group compared to 0 in the colour-coded group.

Testing Adherence to Guidelines

Traditionally, adherence to guidelines is measured by chart review or by surveying healthcare practitioners about their practice patterns. Two innovative studies recently considered in situ simulation as a way of assessing adherence to guidelines. Qian et al. ran an observational study at three tertiary care hospitals that see pediatric patients. (4) They introduced a simulation scenario at one of the centres and then compared post-simulation adherence to their sepsis resuscitation checklist, finding compliance of 61.7% at the hospital that ran simulation compared with 23.1% at the two hospitals that did not (p<0.01). (4) Kessler et al. used standardized in situ simulations to measure and compare adherence to pediatric sepsis guidelines in a series of emergency departments. (7) They did not test simulation as a means of increasing adherence to guidelines but rather used in situ simulation as a tool to determine baseline adherence rates.

Assessing Readiness for Pediatric Patients and Disaster Preparedness

In many centres, acutely ill pediatric patients are, fortunately, a rarity. Abulebda et al. measured the Pediatric Readiness Score (PRS) pre- and post-implementation of an improvement program that included in situ simulations in a multidisciplinary (MD, RN, RT) emergency department setting. (3) They demonstrated an increase in PRS from 58.4 to 74.7 (p = 0.009). This suggests that in situ simulation can be used effectively to prepare emergency department care teams for patient populations that are not the norm at a given centre.

This can be extended to disaster preparation as well. Jung et al. described how high influxes of patients to the emergency department during disasters can contribute to increased medical errors and poorer patient outcomes. (6) They found that in situ simulation can improve communication as well as knowledge in disaster situations.

Testing New Facilities Prior to Opening

John Kotter outlined an 8-step process for leading change initiatives in his book Leading Change. (23) Step 5 is to “enable action by removing barriers”. (23) This involves removing barriers such as inefficient processes and breaking down hierarchies so that work can occur across silos to generate an impact. (23) Anyone who has worked in a facility that has undergone a major renovation, or even an entirely new build, will have experienced some of the inefficiencies and issues that surface. In situ simulation may provide a medium through which to discover these inefficiencies and to test new facilities before they open for regular use.

Geis et al. completed an observational study that used a series of in situ simulations to test a new satellite hospital and pediatric ED. (16) They had 81 participants (MD, RN, RT, EMS) involved in 24 in situ simulations over 3 months. They identified 37 latent safety threats of which 32 could be rectified prior to the building opening for regular use. These included equipment issues such as insufficient oxygen supply to resuscitate more than one patient at a time, resource concerns such as room layout preventing access by EMS and observation unit beds not fitting through resuscitation room doors, medication issues such as inadequate medication stations, and personnel concerns such as insufficient nursing staff to draw up meds.

Summary & What’s Next?

As you can see, there are many useful quality improvement processes that can come directly from a robust ED in situ simulation program. It takes well-defined goals and objectives, as well as institutional buy-in, to run a successful in situ simulation program. With that in mind, look out for our next post, which will detail our emergency department in situ simulation program at Kelowna General Hospital, including aims, structure, participants, results, and lessons learned!

We would love to hear from you. If you have any questions or comments please feel free to comment on this post or to reach us by twitter (@baylis_jared, @KClarkEM, @KelownaEM).

 

References:

  1. Wheeler DS, Geis G, Mack EH, LeMaster T, Patterson MD. High-reliability emergency response teams in the hospital: improving quality and safety using in situ simulation training. BMJ Qual Saf. 2013 Feb 1:bmjqs-2012.
  2. Yajamanyam PK, Sohi D. In situ simulation as a quality improvement initiative. Archives of Disease in Childhood-Education and Practice. 2015 Jun 1;100(3):162-3.
  3. Abulebda K, Lutfi R, Whitfill T, Abu‐Sultaneh S, Leeper KJ, Weinstein E, Auerbach MA. A collaborative in‐situ simulation‐based pediatric readiness improvement program for community emergency departments. Academic Emergency Medicine. 2017 Oct 4.
  4. Qian J, Wang Y, Zhang Y, Zhu X, Rong Q, Wei H. A Survey of the first-hour basic care tasks of severe sepsis and septic shock in pediatric patients and an evaluation of medical simulation on improving the compliance of the tasks. The Journal of emergency medicine. 2016 Feb 29;50(2):239-45.
  5. Moreira ME, Hernandez C, Stevens AD, Jones S, Sande M, Blumen JR, Hopkins E, Bakes K, Haukoos JS. Color-coded prefilled medication syringes decrease time to delivery and dosing error in simulated emergency department pediatric resuscitations. Annals of emergency medicine. 2015 Aug 31;66(2):97-106.
  6. Jung D, Carman M, Aga R, Burnett A. Disaster Preparedness in the Emergency Department Using In Situ Simulation. Advanced emergency nursing journal. 2016 Jan 1;38(1):56-68.
  7. Kessler DO, Walsh B, Whitfill T, Gangadharan S, Gawel M, Brown L, Auerbach M. Disparities in adherence to pediatric sepsis guidelines across a spectrum of emergency departments: a multicenter, cross-sectional observational in situ simulation study. The Journal of emergency medicine. 2016 Mar 31;50(3):403-15.
  8. Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Qual Saf. 2012 Dec 1:bmjqs-2012.
  9. Miller D, Crandall C, Washington C, McLaughlin S. Improving teamwork and communication in trauma care through in situ simulations. Academic Emergency Medicine. 2012 May 1;19(5):608-12.
  10. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf. 2012 Dec 1:bmjqs-2012.
  11. Petrosoniak A, Auerbach M, Wong AH, Hicks CM. In situ simulation in emergency medicine: moving beyond the simulation lab. Emergency Medicine Australasia. 2017 Feb 1;29(1):83-8.
  12. Siegel NA, Kobayashi L, Dunbar-Viveiros JA, Devine J, Al-Rasheed RS, Gardiner FG, Olsson K, Lai S, Jones MS, Dannecker M, Overly FL. In Situ Medical Simulation Investigation of Emergency Department Procedural Sedation With Randomized Trial of Experimental Bedside Clinical Process Guidance Intervention. Simulation in healthcare. 2015 Jun 1;10(3):146-53.
  13. Steinemann S, Berg B, Skinner A, DiTulio A, Anzelon K, Terada K, Oliver C, Ho HC, Speck C. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. Journal of surgical education. 2011 Dec 31;68(6):472-7.
  14. Zimmermann K, Holzinger IB, Ganassi L, Esslinger P, Pilgrim S, Allen M, Burmester M, Stocker M. Inter-professional in-situ simulated team and resuscitation training for patient safety: Description and impact of a programmatic approach. BMC medical education. 2015 Oct 29;15(1):189.
  15. Theilen U, Leonard P, Jones P, Ardill R, Weitz J, Agrawal D, Simpson D. Regular in situ simulation training of paediatric medical emergency team improves hospital response to deteriorating patients. Resuscitation. 2013 Feb 28;84(2):218-22.
  16. Geis GL, Pio B, Pendergrass TL, Moyer MR, Patterson MD. Simulation to assess the safety of new healthcare teams and new facilities. Simulation in Healthcare. 2011 Jun 1;6(3):125-33.
  17. Fan M, Petrosoniak A, Pinkney S, Hicks C, White K, Almeida AP, Campbell D, McGowan M, Gray A, Trbovich P. Study protocol for a framework analysis using video review to identify latent safety threats: trauma resuscitation using in situ simulation team training (TRUST). BMJ open. 2016 Nov 1;6(11):e013683.
  18. Ullman E, Kennedy M, Di Delupis FD, Pisanelli P, Burbui AG, Cussen M, Galli L, Pini R, Gensini GF. The Tuscan Mobile Simulation Program: a description of a program for the delivery of in situ simulation training. Internal and emergency medicine. 2016 Sep 1;11(6):837-41.
  19. Spurr J, Gatward J, Joshi N, Carley SD. Top 10 (+ 1) tips to get started with in situ simulation in emergency and critical care departments. Emerg Med J. 2016 Jul 1;33(7):514-6.
  20. Petrosoniak A, Ryzynski A, Lebovic G, Woolfrey K. Cricothyroidotomy In Situ Simulation Curriculum (CRIC Study): Training Residents for Rare Procedures. Simulation in Healthcare. 2017 Apr 1;12(2):76-82.
  21. US Department of Health and Human Service. Quality improvement [Internet]. 2011 April [cited Dec 2017]. https://www.hrsa.gov/sites/default/files/quality/toolbox/508pdfs/qualityimprovement.pdf
  22. Carne B, Kennedy M, Gray T. Crisis resource management in emergency medicine. Emergency Medicine Australasia. 2012 Feb 1;24(1):7-13.
  23. Kotter JP. Leading change. Harvard Business Press; 1996.

Intubation with Missing BVM

This case is written by Drs. Andrew Petrosoniak and Nicole Kester-Greene. Dr. Andrew Petrosoniak is an emergency physician and trauma team leader at St. Michael’s Hospital. He’s an assistant professor at the University of Toronto and an associate scientist at the Li Ka Shing Knowledge Institute.  Dr. Nicole Kester-Greene is a staff physician at Sunnybrook Health Sciences Centre in the Department of Emergency Services and an assistant professor in the Department of Medicine, Division of Emergency Medicine. She has completed a simulation educators training course at Harvard Centre for Medical Simulation and is currently Director of Emergency Medicine Simulation at Sunnybrook.

Why it Matters

Emergency medicine is about anticipating the worst and preparing for it. This case highlights this perfectly. In particular, it emphasizes:

  • The need to have a mental (or physical) checklist to ensure all necessary equipment is available at the bedside before starting a procedure
  • The complex nature of managing an immunocompromised patient with respiratory illness
  • The role for intubation in a hypoxic patient

Clinical Vignette

You are working in a large community ED. The triage nurse tells you that she has just put a patient in the resuscitation room. He is a 41-year-old man with HIV. He is known to be non-compliant with his anti-retrovirals. He noticed progressive shortness of breath over 3-4 days and has had a dry cough for 10 days. His O2 sat was in the 80s at triage.

Case Summary

A 41-year-old male with HIV (not on treatment) presents to the ED with a cough for 10 days, progressive dyspnea, and fever. He is hypoxic at triage and brought immediately to the resuscitation room. He has transient improvement on oxygen but then has progressive worsening of his hypoxia and dyspnea. Intubation is required. The team needs to prepare for RSI and identify that the BVM is missing from the room prior to intubation.

Download the case here: Intubation with Missing BVM

CXR for the case found here:

PJP pneumonia

(CXR source: https://radiopaedia.org/cases/35823)

 

Simulation Solutions for Low Resource Settings

This review on simulation teaching in a low resource setting was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow after her Toronto-Addis Ababa Academic Collaboration in Emergency Medicine (TAAAC-EM) elective.

This past November I participated in an international elective in Addis Ababa, Ethiopia as a resident on the TAAAC-EM team. TAAAC-EM sends visiting faculty to teach and clinically mentor Ethiopian EM residents 3-4 times a year. Teaching trips cover a longitudinal, three-year curriculum through didactic teaching sessions, practical seminars, and bedside clinical supervision.

One of the areas of development identified by the residents was a yearning for more simulation exercises. As a budding simulationist and SHRED fellow, I was particularly keen to help contribute to the curriculum. Starting from the basics, we created two simulation curricula:

  • Rapid Cycle Deliberate Practice (RCDP) simulation exercises that covered basic Vfib, Vtach, PEA and asystolic arrests in short 5-minute simulation and debrief cycles and,
  • Managing airway exercises; a series of three cases addressing preparing for intubation, intubating a patient in respiratory extremis, and then troubleshooting an intubated and ventilated patient using the DOPES mnemonic

The local simulation centres were relatively well equipped; however, there were no high-fidelity mannequins or elaborate monitoring setups. We had to use an intubating mannequin head and torso for the airway simulation and a basic CPR mannequin for the RCDP exercise.

Picture1

Set up for the airway simulation exercises

For additional materials, we had to MacGyver some of the tools we needed to create these simulation scenarios, and from doing so we learned some valuable lessons. This post will outline some of the ways we created a higher sense of fidelity, even with low technology resources, and created high yield learning experiences for the residents.

Understand what actual resources are available to the trainees in the ED before you try to create a simulation exercise

It took a few weeks of working in the ED to really understand what resources were available to the staff and residents. For the most part, there was no continuous saturation monitoring. X-rays were typically not done until after the patient was stabilized, because the patient had to be taken out of the resuscitation room to the imaging department. Lastly, some medications that we use in Canada on a daily basis were simply not available. The first step to creating simulation in low resource settings is to understand the available resources.

Communication skills do not require a high technology environment; neither do CPR or BVM skills!

Both simulation exercises focussed on team communication, closed loop communication, team preparation for interventions (like intubation), and team leadership. While our supplies were basic, the simulations lent themselves well to discussing improved communication methods in the ED during resuscitations. We also emphasized excellent quality CPR and reiterated basic bagging techniques. We can all use refreshers on the basics, and this is one way we made the simulations fruitful for learners of all levels.

Picture2

GIFs make great rhythm generators

For these simulation sessions we did not consistently have access to a rhythm generator, so we downloaded GIFs of Vfib, Vtach, asystole, and normal sinus rhythm to display on our phones. This turned a partially functioning defibrillator into a monitor! We could also change the rhythm easily by picking a new GIF.

Picture3.jpg

Wherever possible, add fidelity

While the technology was limited, the opportunity to bring human factors into the simulations was not. We used real bottles of medications, which helped the residents suspend some of their disbelief, and we encouraged them to verbalize the actual medication doses. We talked about safely labelling syringes so as not to mix up which medication was in which syringe (sedation vs paralysis), and how to do a team time out before a critical intervention to ensure necessary supplies were available. We even simulated a broken laryngoscope by removing the battery to add a level of complexity to the case; if they didn't check the laryngoscope ahead of time, they wouldn't have noticed.

Picture4

At its core, simulation should be fun

One of my most poignant memories from these simulation sessions is how much fun the residents had. We had to pause a few times because we were laughing so hard. These residents work extremely closely over their training, side by side and as a group for the majority of their on-service time; they see more of each other than I have seen of my co-residents during my residency thus far. I noticed that because of their extensive time together, they seem to have more personal relationships, and as such, even in the ED they have more fun together. The residents all appreciated these sessions where they learned together, and really enjoyed each other's company. Their joy refreshed and rejuvenated my love for simulation!

Simulation is an important teaching tool for learners in EM no matter where they are training. We take for granted our high tech sim labs, dedicated simulation curricula, and protected time to practice resuscitations and learn. Simulation offers the ability to make mistakes in a safe environment, to learn with our peers, and to develop an expertise that we can apply in the ED—something the residents in Addis Ababa really wanted to have as part of their ongoing curriculum. Applying my simulation training to a low resource setting has helped me grow as a simulationist and become pretty creative in how I approach resource limitations. I’m particularly grateful to the residents for not only being patient, keen, and enthusiastic as we worked through some of these challenges, but also for allowing me to take photos to post on this blog!

 

Anaphylaxis and Medication Error

This case is written by Dr. Kyla Caners. She is a staff emergency physician in Hamilton, Ontario and the Simulation Director of McMaster University’s FRCP-EM program. She is also one of the Editors-in-Chief here at EmSimCases.

Why it Matters

Anaphylaxis is a very common presentation to the ED. Knowing how to treat it expediently is essential. This case is designed to review common errors made by junior learners in the emergency department. In particular, it reviews:

  • The need to prioritize epinephrine above all other medications
  • The IM dosing of epinephrine
  • The need to understand the different concentrations of epinephrine available and how to avoid medication errors that occur as a result

Clinical Vignette

Report from EMS:

“This patient was recently prescribed Levofloxacin for a presumed pneumonia by his family MD. Approximately one hour after his first dose he developed a diffuse pruritic rash and felt acutely dyspneic. He denies any chest pain, syncope, fever or diaphoresis. He has not had Levofloxacin prior and there is no previous history of this. The highest SBP we could get was 90 by palp. Heart rate has been around 100. We’ve been unable to get an IV. Epi 0.5 IM x 1 has been given.”

Case Summary

A 59-year-old male presents to the ED with anaphylaxis. He has already received a dose of epinephrine by EMS. On arrival, he will be wheezing and hypotensive with angioedema. Learners will be expected to provide repeat dosing of epinephrine as well as to start an epinephrine infusion in order for the patient to improve. They will also be expected to prepare for intubation. To highlight common errors in anaphylaxis treatment, a nurse will delay giving epinephrine unless specifically instructed to give it before other medications. The nurse will also attempt to give the cardiac epinephrine, requiring the team leader to clarify proper dosing. Once an epinephrine infusion has started, the patient’s angioedema and breathing will improve.

Download the case here: Anaphylaxis

Validity – Starting with the Basics

This critique on validity and how it relates to simulation teaching was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

When designing simulation exercises that will ultimately lead to the assessment and evaluation of a learner’s competency for a given skill, the validity of the simulation as a teaching tool should be addressed on a variety of levels. This is especially relevant when creating simulation exercises for competencies outside of the medical expert realm such as communication, team training and problem solving.

As a budding resuscitationist and simulationist, understanding validity is vital to ensuring that the simulation exercises that I create are actually measuring what they intend to measure, that is, they are valid (Devitt et al.).  As we look ahead to Competency Based Medical Education (CBME), it will become increasingly important to develop simulation exercises that are not only interesting and high-yield with respect to training residents in critical skills, but also have high validity with respect to reproducibility as well as translation of skills into real world resuscitation and patient care.

In order to better illustrate the various types of validity and how they can affect simulation design, I will present an example of an exercise I implemented as I was tasked with teaching a 5 year old to tie her shoelaces. In order to do so I taught her using a model, very similar to this one I found on Pinterest:

shoelaces

We first learned the rhyme, then used this template to practice over and over again. The idea behind using the model was to provide the reference of the poem right next to the shoes, but also to enlarge the scale of the shoes and laces, since her tiny feet meant tiny laces on shoes that were difficult for her to manipulate. Also, we could do this exercise at the table, which allowed us to be comfortable as we learned. At the end of the exercise, I gave her a "test" and asked her to tie the cardboard shoes to see if she remembered what we learned. While there was no rigorous evaluation scheme, the standard was that she should be able to tie the knot to completion (competency), leading to two loops at the end.

I applied my simulation learning to this experience to assess the validity of this exercise in improving her ability to tie her laces. The test involved her tying these laces by herself without prompting.

Face validity: Does this exercise appear to test the skills we want it to?

Very similar to "at face value," face validity is how much a test or exercise looks like it is going to measure what it intends to measure. This can be assessed from an "outsider" perspective: for example, whether her mom feels that this test could measure her child's ability to tie a shoe. Whether this test works or not is not the concern of face validity; rather, it is whether it looks like it will work (Andale). Her mom thought this exercise would be useful in learning how to tie shoes, so face validity was achieved.

Content validity: Does the content of this test or exercise reflect the knowledge the learner needs to display? 

Content validity is the extent to which the content in the simulation exercise is relevant to what you are trying to evaluate (Hall, Pickett and Dagnone). Content validity requires an understanding of the content required to either learn a skill or perform a task. In Emergency Medicine, content validity is easily understood when considering a simulation exercise designed to teach learners to treat a Vfib arrest: the content is established by the ACLS guidelines, and best practices have been clearly laid out. For more nebulous skill sets (communication, complex resuscitations, rare but critical skills like bougie-assisted cricothyroidotomies, problem solving, team training), the content is not as well defined, and may require surveys of experts, panels, and reviews by independent groups (Hall, Pickett and Dagnone). For my shoelace tying learner, the content was defined as a single way to tie her shoelaces; however, it did not include the initial lacing of the shoes or how to tell which shoe is right or left, and, most importantly, the final test did not include these components. Had I tested her on lacing or on appropriately choosing right or left, I would not have had content or face validity. This speaks to choosing appropriate objectives for a simulation exercise: objectives are the foundation upon which learners develop a scaffolding for their learning. If instructors are going to use simulation to evaluate learners, the objectives will need to clearly drive the content, and in turn the evaluation.

Construct Validity: Is the test structured in a way that actually measures what it claims to?

In short, construct validity is assessing if you are measuring what you intend to measure.

My hypothesis for our exercise was that any measurable improvement in her ability to tie her shoelaces would be attributable to the exercise, and that with this exercise she would improve her ability to complete the steps required to tie her shoelaces. At the beginning of the shoelace tying exercise, she could pick up the laces, one in each hand, and then looked at me mostly blankly for the next steps. At the end of the exercise and for the final "test," she was able to hold the laces and complete the teepee so it's "closed tight" without any prompting. The fact that she improved is evidence to support the construct; however, construct validity is an iterative process and requires different forms of evaluation to prove the construct. To verify construct validity, other tests with similar qualities can be used. For this shoelace tying exercise, we might say that shoelace tying is a product of fine motor dexterity, and fine motor dexterity theory states that as her ability to perform other dexterity-based exercises (tying a bow, threading beads onto a string) improves, so would her performance on the test. To validate our construct, we could then perform the exercise over time and see if her performance improves as her motor skills develop, or compare her performance on the test to that of an older child or adult, who would have better motor skills and would perform better on the test.

External validity: Can the results of this exercise or study be generalized to other populations or settings, and if so, which ones?

With this shoelace tying exercise, should the results be tested and a causal relationship be established between this exercise and ability to tie shoes, then the next step would be to see if the results can be generalized to other learners in different environments. This would require further study and a careful selection of participant groups and participants to reduce bias. This would also be an opportunity to vary the context of the exercise, level of difficulty, and to introduce variables to see if the cardboard model could be extrapolated to actual shoe tying.

Internal validity: Is there another cause that could explain my observation?

With this exercise, her ability to tie laces improved over the course of the day. In order to assess internal validity, it is important to consider whether any improvement or change in behaviour could be attributed to another external factor (Shuttleworth). For this exercise, there was only one instructor and one student in a consistent environment. If we had reproduced this exercise using a few novice shoelace tiers and a few different instructors, it might have added confounders to the experiment, making it less clear whether improvements in shoelace tying were attributable to the exercise or the instructors. Selection bias can also affect internal validity: for example, selecting participants who were older (and therefore had more motor dexterity to begin with) or who had previous shoelace tying training would likely affect the outcome. For simulation exercises, internal validity can be confounded by multiple instructors, differences in the mannequin or simulation lab, as well as different instructor styles, which may lead to differences in learning. Overcoming these challenges to internal validity is partly achieved by robust design, but also by repeating the exercise to ensure that the outcomes are reproducible across a wider variety of participants than the sample cohort.

There are many types of validity, and robust research projects require an understanding of validity to guide the initial design of a study or exercise. Through this exercise in validity I was able to better take the somewhat abstract concepts of face validity and internal validity and ground them into practice through a relatively simple exercise. I have found that doing this has helped me form a foundation in validity theory, which I can now expand into evaluating the simulation exercises that I create.

 

REFERENCES

1) Andale. “Face Validity: Definition and Examples.” Statistics How To. Statistics How to 2015. Web. October 20 2017.

2) Devitt, J. H., et al. “The Validity of Performance Assessments Using Simulation.” Anesthesiology 95.1 (2001): 36-42. Print.

3) Hall, A. K., W. Pickett, and J. D. Dagnone. “Development and Evaluation of a Simulation-Based Resuscitation Scenario Assessment Tool for Emergency Medicine Residents.” CJEM 14.3 (2012): 139-46. Print.

4) Shuttleworth, M. (Jul 5, 2009). Internal Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/internal-validity

5) Shuttleworth, M. (Aug 7, 2009). External Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/external-validity

Chest Pain on the Ward

This case is written by Dr. Kyla Caners. She is a staff emergency physician in Hamilton, Ontario and the Simulation Director of McMaster University’s FRCP-EM program. She is also one of the Editors-in-Chief here at EmSimCases.

Why it Matters

When learners are transitioning to residency, they are often fearful of what feels like a sudden increase in responsibility. A big fear that is common among trainees is the idea that they might be left alone to treat something urgent or beyond their skill level. This case was designed to help alleviate some of those fears. The debriefing should focus on local resources available to learners when they feel alone in the middle of the night. The point of the case is to show them they’re not alone. In particular, this case highlights:

  • How to handle a call from the ward about a patient in distress (get things started while on your way to the ward!)
  • The work-up for an admitted patient with chest pain (and how treatment can change quickly!)
  • The senior-level resources available to learners overnight (ICU outreach, anesthesia, the senior resident, their attending over the phone, etc) and when learners should make certain to call their superiors

A Special Note

To make this case particularly realistic, we recommend using your local charting system to create a patient note that can be given to learners. If you use an EMR, then print out what an admission note would look like. If you use paper charting, then handwrite an admission note for learners to review!

Clinical Vignette

You are the junior medical resident on call overnight covering for a team of patients you do not know. You get a page from a nurse on the ward: “one of my patients is having chest pain…can you come and see him?”

*Note: the first part of this scenario is actually done best over the phone. Have the learner stand outside the room and call them on their cell phone.

Case Summary

The case will begin with a phone call from the bedside nurse for a patient on the ward whom the resident on call is covering. The resident will then arrive at the bedside to find a patient complaining of significant chest pain. The patient will be in some respiratory distress due to CHF. The patient's initial ECG will show new T-wave inversion. The patient will prompt the team about his ongoing chest pain, and his ECG will evolve to show an anterolateral STEMI. The team is expected to recognize the evolving STEMI and initiate treatment and cath lab activation.

Download the case here: Chest Pain on the Ward

“Old” ECG for the case found here:

(ECG source: https://lifeinthefastlane.com/ecg-library/normal-sinus-rhythm/)

Initial ECG on the ward found here:

001 Anterior TWI

(ECG source: http://hqmeded-ecg.blogspot.ca/2015/12/lvh-with-anterior-st-elevation-when-is.html)

Repeat ECG on the ward found here:

003 anterolateral STEMI

(ECG source: https://lifeinthefastlane.com/ecg-library/anterior-stemi/)

CXR for the case found here:

(CXR source: https://www.med-ed.virginia.edu/courses/rad/cxr/web%20images/into-chf.jpg)

Simulation-Based Assessment

This critique on simulation-based assessment was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

You like to run simulations.  You have become adept at creating innovative and insightful simulations. You have honed your skills in leading a constructive debrief.  So what’s next? You now hope to be able to measure the impact of your simulation.  How do you design a study to measure the effectiveness of your simulation on medical trainee education?

There are numerous decisions to make when designing a sim-based assessment study. For example, who is included in the study?  Do you use direct observation or videotape recording or both? Who will evaluate the trainees? How do you train your raters to achieve acceptable inter-rater reliability? What are you assessing – team-based performance or individual performance?

One key decision is the evaluation tool used for assessing participants.  A tool ideally should:

  • Have high inter-rater reliability
  • Have high construct validity
  • Be feasible to administer
  • Be able to discriminate between different levels of trainees

Two commonly used sim-based assessment tools are Global Rating Scales (GRS) and Checklists.  Here, these tools will be compared to evaluate their role for the assessment of simulation in medical education.

Global Rating Scales vs Checklists

GRS are tools that allow raters to judge participants' overall performance and/or provide an overall impression of performance on specific sub-tasks.1 Checklists are lists of specific actions or items that are to be performed by the learner. Checklists prompt raters to attest to directly observable actions.1

Many GRS ask raters to utilize a summary item to rate overall ability or to rate a "global impression" of learners. This summary item can be a scale from fail to excellent, as in Figure 1.2 Another GRS may assess learners' abilities to perform a task independently by having raters mark learners on a scale from "not competent" to "performs independently". In studies, the overall GRS has been shown to be more sensitive at discriminating between levels of experience of learners than checklists.3,4,5 Other research has shown that GRS demonstrate superior inter-item and inter-station reliability and validity compared to checklists.6,7,8 GRS can be used across multiple tasks and may be able to better measure expertise levels in learners.1

Some of the pitfalls of GRS are that they can be quite subjective. They also rely on "expert" opinion in order to grade learners effectively and reliably.

GRS

Figure 1: assessment tool used by Hall et al in their study evaluating a simulation-based assessment tool for emergency medical residents using both a checklist and global assessment rating.2

Checklists, on the other hand, are thought to be less subjective, though some studies may argue this is false as the language used in the checklist can be subjective.10 If designed well, however, checklists provide clear step-by-step outlines for raters to mark observable behaviours.  A well-designed checklist would be easy to administer so any teacher can use it (and not rely on experts to administer the tool).  By measuring defined and specific behaviours, checklists may help to guide feedback to learners.
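
To make the dichotomous, directly observable nature of checklist scoring concrete, here is a small illustrative sketch (the items are hypothetical examples for an anaphylaxis scenario, not a validated tool):

```python
# Each checklist item is a yes/no observation of a specific behaviour;
# the rater marks only whether the action was seen, not how well it was done.
checklist = {
    "Recognizes anaphylaxis": True,
    "Orders IM epinephrine before other medications": True,
    "States correct IM epinephrine dose and concentration": False,
    "Prepares for intubation": True,
    "Starts epinephrine infusion for refractory symptoms": True,
}

score = sum(checklist.values())  # True counts as 1, False as 0
print(f"{score}/{len(checklist)} items observed")  # → 4/5 items observed
```

The dichotomous structure is what makes checklists easy to administer, but it is also the source of the information loss noted below: a hesitant, prompted action and a fluent one both score a single point.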

However, some pitfalls of checklists are that high scores have not been shown to rule out "incompetence" and therefore may not be accurate at evaluating skill level.9,10 Checklists may also cover multiple areas of competence, which may contribute to lower inter-item reliability.1 Other studies have found that despite checklists being theoretically easy to use, inter-rater reliability was consistently low.9 However, a systematic review of the literature found that checklists performed similarly to GRS in terms of inter-rater reliability.1

 

TABLE 1: Pros and Cons of Global Rating Scales and Checklists

Global Rating Scales

  Pros:
  • Higher internal reliability
  • More sensitive in defining level of training
  • Higher inter-station reliability and generalizability

  Cons:
  • Less precise
  • Subjective rater judgement and decision making
  • May require experts or more rater training in order to rate learners

Checklists

  Pros:
  • Good for the measurement of defined steps or specific components of performance
  • Possibly more objective
  • Easy to administer
  • Easy to identify defined actions for learner feedback

  Cons:
  • Possibly lower reliability
  • Requires dichotomous ratings, possibly resulting in loss of information

Conclusion

With the move towards competency-based education, simulation will play an important role in evaluating learners' competencies. Simulation-based assessment allows for direct evaluation of individuals' knowledge, technical skills, clinical reasoning, and teamwork. Assessment tools are an important component of medical education.

An optimal assessment tool for evaluating simulation would be reliable, valid, comprehensive, and able to discriminate between learners' abilities. Global Rating Scales and Checklists each have their own advantages and pitfalls, and each may be used for the assessment of specific outcome measures. Studies suggest that GRS have some important advantages over checklists, although the evidence for checklists appears somewhat stronger than previously thought. Whichever tool is chosen, it is critical to design and test the tool to ensure that it appropriately assesses the desired outcome. If feasible, using both a Checklist and a Global Rating Scale would help to optimize the effectiveness of simulation-based education.

 

REFERENCES

1. Ilgen JS et al. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015 Feb;49(2):161-73.

2. Hall AK et al. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. CJEM. 2012 May;14(3):139-46.

3. Hodges B et al. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012-6.

4. Morgan PJ et al. A comparison of global ratings and checklist scores from an undergraduate assessment using an anesthesia simulator. Acad Med. 2001;76(10):1053-5.

5. Tedesco MM et al. Simulation-based endovascular skills assessment: the future of credentialing? J Vasc Surg. 2008 May;47(5):1008-11.

6. Hodges B et al. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129-34.

7. Hodges B, McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012-6.

8. Regehr G et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993-7.

9. Walsak A et al. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med. 2015 Aug;90(8):1100-8.

10. Ma IW et al. Comparing the use of global rating scale with checklists for the assessment of central venous catheterization skills using simulation. Adv Health Sci Educ Theory Pract. 2012;17:457-70.

 

PE with Bleeding

This case is written by Dr. Donika Orlich. She is a staff physician practising in the Greater Toronto Area. She completed both her Emergency Medicine training and Clinician Educator Diploma at McMaster University.

Why it Matters

Many simulation cases that deal with pulmonary embolism seem to focus on the decision to administer thrombolytics (usually upon a patient’s arrest). This case is different. While the team must administer thrombolytics to a patient with known pulmonary embolism, the catch is that they must then also recognize shock as a result of intra-abdominal bleeding. As a result, the case highlights the following:

  • The dose of thrombolytics to be used in the context of cardiac arrest
  • The importance of an approach to undifferentiated shock after ROSC. (It’s not all cardiogenic!)
  • That bleeding is a complication of thrombolysis. This is drilled into our brains as the major complication, but somehow it is diagnostically challenging to recognize.

Clinical Vignette

You are called urgently to the bedside of a patient who is in the Emergency Department awaiting medicine consultation. Your colleague saw her earlier. She is 63 years old and has a CT-confirmed pulmonary embolism. She had presented with shortness of breath on exertion in the context of a hysterectomy 4 weeks ago. She had been stable in the ED until she got up to go to the bathroom and suddenly developed severe shortness of breath.

Case Summary

A 63-year-old female is in the Emergency Department awaiting internal medicine consultation for a diagnosed pulmonary embolism. She suddenly becomes very short of breath while walking to the bathroom and the team is called to assess. The patient will then arrest, necessitating thrombolysis. After ROSC, she will stabilize briefly but then develop increasing vasopressor requirements. The team will need to work through the shock differential diagnosis and recognize free fluid in the abdomen as a complication of thrombolysis requiring surgical consultation and transfusion.

Download the case here: PE with Bleeding

ECG for the case found here:

Massive PE ECG

(ECG source: https://lifeinthefastlane.com/ecg-library/pulmonary-embolism/)

Initial CXR for the case found here:

normal female CXR radiopedia

(CXR source: https://radiopaedia.org/cases/normal-chest-radiograph-female-1)

Post-intubation CXR for the case found here:

normal-intubation2

(CXR source: https://emcow.files.wordpress.com/2012/11/normal-intubation2.jpg)

Pericardial ultrasound for the case found here:

Normal lung ultrasound for the case found here:

Abdominal free fluid ultrasound for the case found here:

RUQ FF

(All ultrasound images are courtesy of McMaster PoCUS Subspecialty Training Program.)