Limiting Gender Bias in Simulation Assessment

Today’s piece is written by Dr. Lall. She is an Associate Professor and Associate Residency Director of Emergency Medicine at Emory University in Atlanta, GA, and the current president of the Academy for Women in Academic Emergency Medicine. Her research focuses on physician wellness and on gender bias and inequity in medicine. The following is a summary of her recent publication on this issue.

You can find the publication here:

Jeffrey N. Siegelman, Michelle Lall, Lindsay Lee, Tim P. Moran, Joshua Wallenstein, and Bijal Shah (2018) Gender Bias in Simulation-Based Assessments of Emergency Medicine Residents. Journal of Graduate Medical Education: August 2018, Vol. 10, No. 4, pp. 411-415.

Background:

There is a paucity of studies on gender differences in milestone assessment. One recent large multi-site cohort study of EM residents evaluated bias in end-of-shift evaluations and found a significant bias based on resident gender (Dayal A et al, 2017). Shift evaluations usually represent subjective assessments, and residents are evaluated only on cases seen during a particular shift, resulting in considerable variation in which competencies are assessed across residents and rated by faculty. Simulation allows for a more structured, consistent evaluation environment in which residents can be tested on identical clinical problems and in which specific competencies can be assessed. We hypothesized that simulation, being a more objective assessment tool, may mitigate gender disparities in resident assessment.

In our three-year experience with biannual milestone-based simulation assessments of all our EM residents, no significant gender bias was observed, in contrast to other forms of resident assessment, such as end-of-shift evaluations.

Tips for SIM Educators:

  1. Training the standardized patient is key to successful simulation assessment.
    1. Pilot test the scenarios to ensure the case plays as expected and appropriately elicits the opportunity for the resident to perform the desired critical behaviors.
    2. Evaluate for potential bias introduced by the standardized patient script or actions as the scenario plays out.
    3. Ensure that standardized patient responses are the same every time.
      1. Same response in the same tone of voice with the same facial expressions whether the physician is male or female.
    4. Standardized patient script cues should be written with gender-neutral language.
      1. If the resident does not introduce themself to the patient, prompt the resident with “Doctor, what is your name?”
      2. Avoid language like miss, ma’am or sir
  2. Education and training of the rater is of critical importance.
    1. Raters should be instructed that evaluation in these cases is not subjective.  Evaluation is binary and based on observable behavior only.
  3. Convert milestone language into binary, observable behaviors
    1. Assessment items should avoid language that may introduce bias including subjective assessments.
      1. Agentic adjectives: typically used to describe men; when used to describe women, they carry a negative connotation. Examples include assertive, autonomous, independent, confident, intellectual.
      2. Communal adjectives: typically used to describe women; when not demonstrated by women, their absence carries a negative connotation. Examples include kind, compassionate, sympathetic, warm, helpful.
    2. Focus on action based assessment items, for example:
      1. Resident introduced themself to the patient
      2. Resident updated the family using lay terminology
      3. Resident ordered magnesium without prompting


Checklist for Limiting Bias in Simulation Assessment

  • Standardize the Scenario
    • Standardized patient/Confederate scripting and training is crucial
    • Simulation operator training
    • Pilot the case
  • Create an Objective Rating Tool
    • Focus on observable behaviors rather than subjective assessments
      • Observed/ Not Observed/ Unable to Assess
    • Train the raters
    • Use language that avoids bias
  • Monitor for bias
    • Analyze data after an assessment for validity evidence, reliability, and evidence of bias (a minimal sketch of one such check follows this checklist)
    • Make adjustments to the case as needed
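For the “monitor for bias” step, a minimal sketch of what such a check might look like is shown below. It assumes a hypothetical export of binary checklist results (one row per resident per item, scored Observed or Not Observed) and uses a simple chi-squared test to compare Observed rates between female and male residents; a real analysis would also account for training level, rater, case, and repeated measures (for example, with a mixed-effects model). The item name and data are invented, and scipy is assumed to be available.

```python
# Minimal, illustrative sketch only: flag checklist items whose "Observed" rates
# differ by resident gender. All data below are invented for the example.
from collections import Counter
from scipy.stats import chi2_contingency  # assumes scipy is installed

def bias_check(rows, item):
    """Compare 'Observed' rates between genders for one binary checklist item."""
    counts = Counter()
    for r in rows:
        if r["item"] == item:
            counts[(r["gender"], r["result"])] += 1
    table = [
        [counts[("F", "Observed")], counts[("F", "Not Observed")]],
        [counts[("M", "Observed")], counts[("M", "Not Observed")]],
    ]
    chi2, p, _, _ = chi2_contingency(table)
    return table, p

# Made-up results for one action-based item like those in the tips above
rows = (
    [{"gender": "F", "item": "ordered magnesium", "result": "Observed"}] * 18
    + [{"gender": "F", "item": "ordered magnesium", "result": "Not Observed"}] * 4
    + [{"gender": "M", "item": "ordered magnesium", "result": "Observed"}] * 17
    + [{"gender": "M", "item": "ordered magnesium", "result": "Not Observed"}] * 5
)
table, p = bias_check(rows, "ordered magnesium")
print(table, f"p = {p:.2f}")  # a small p-value would flag the item for review
```

A flagged item is not proof of bias, but it points to the scenario scripts and assessment items that should be re-examined and adjusted, as suggested in the last two checklist points.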

Learner-Consultant Communication

This case was written by Dr. Jared Baylis. Jared is currently a PGY-4 in emergency medicine at UBC (Interior Site – Kelowna, BC) and is completing a simulation fellowship in Vancouver, BC.

Twitter – @baylis_jared + @KelownaEM

Why It Matters

Referral-consultant interactions occur with regularity in the emergency department. These interactions are critically important to safe and effective patient care. Several frameworks have been developed for teaching learners how to communicate during a consultation, including the 5C, PIQUED, and CONSULT models. This case allows simulation educators to incorporate whichever consultation framework they prefer into a scenario that supports deliberate practice of the consultation process.

Clinical Vignette

You are a junior resident working in a tertiary care centre and you are asked to see a 58-year-old female patient who was sent in from the cancer centre. She is known to have metastatic non-small-cell lung cancer and has been increasingly dyspneic with postural pre-syncope over the last few days. Her history is significant for a previous malignant pericardial effusion that was drained therapeutically a few months ago.

Case Summary

In this case, learners will be expected to recognize that this 58-year-old female patient with metastatic non-small-cell lung cancer has tamponade physiology secondary to a malignant pericardial effusion. The patient will stabilize somewhat with a gentle fluid bolus but the learners will be expected to urgently consult cardiology or cardiac/thoracic surgery (depending on the centre) for a pericardiocentesis and/or pericardial window.

Download the case here: Learner-Consultant Communication

Checklists for 5C, PIQUED, and CONSULT frameworks: Consult Framework Checklists

FOAMed article on 5C framework: 5C CanadiEM

FOAMed article on PIQUED framework: PIQUED CanadiEM

ECG for the case found here:

ECG

(ECG Source: https://lifeinthefastlane.com/ecg-library/basics/low-qrs-voltage/)

CXR for the case found here:

CXR

(CXR Source: https://radiopaedia.org)

POCUS for the case found here:

 

(Ultrasound Source: https://www.youtube.com/watch?v=qAlU8qhC1cU)

In Situ Simulation – Part 2: ED in situ simulation for QI at Kelowna General Hospital

This 2 part series was written by Jared Baylis, JoAnne Slinn, and Kevin Clark. 

Jared Baylis (@baylis_jared) is a PGY-4 and chief resident at the Interior Site of UBC’s Emergency Medicine residency program (@KelownaEM). He has an interest in simulation, medical education, and administration/leadership and is currently a simulation fellow through the Centre of Excellence for Simulation Education and Innovation in Vancouver, BC, and an MMEd student through Dundee University.

JoAnne Slinn is a Registered Nurse, with a background in emergency nursing, and the simulation nurse educator at the Pritchard Simulation Centre in Kelowna. She recently completed her Masters of Nursing and has CNA certification in emergency nursing.

Kevin Clark (@KClarkEM) is the Site/Program Director for the UBC Emergency Medicine program in Kelowna. He completed a master’s degree in education with a focus on simulation back in the day when high fidelity simulation was new and sim fellowships weren’t yet a thing.

Welcome back to part 2 of our series on in situ simulation for quality improvement! Check out last month’s post for a deeper dive into the literature behind this concept. In this post, we will outline the vision, structure, participants, results, and lessons learned in the implementation of our ED in situ simulation program at Kelowna General Hospital (KGH).

The Vision

KGH is a tertiary care community hospital serving the interior region of British Columbia. Our emergency department (ED) sees in excess of 85,000 patient visits per year. In 2013, we became a University of British Columbia distributed site for training Royal College emergency medicine residents. With this program came a responsibility to increase the academic activities in the department, both for education and for team building and quality improvement (QI). Our aims for the program were to:

  1. Improve interprofessional collaboration.
  2. Improve resuscitation team communication.
  3. Develop resident resuscitation leadership skills.
  4. Educate emergency department professionals on medical expertise related to resuscitation.
  5. Identify and select two quality improvement action items that arise within each resuscitation scenario.
  6. Assess and respond to each QI action item in the interest of better patient care.
  7. Educate participants and other department staff with regard to each QI action item in an effort to change processes and behaviors.

From a departmental QI standpoint, we applied the “SMART” framework: specific, measurable, attainable, realistic, and time-based.¹ Our goal, as stated above, was to select two QI action items that came up during the debrief following our simulation. Our nurse educator group follows up on each of these items and reports back to the local ED network, pharmacy, or the ED manager, depending on which is most appropriate for the particular QI issue. This ensures our model remains sustainable over time. Follow-up emails are sent out to “close the loop” with attendees and department staff after each session. Lessons from the simulations are also presented to the local ED network to share with smaller sites that do not have simulation opportunities.

The Structure

Each session includes one to two scenarios in which a “patient” with a critical illness is resuscitated by the team. Both adult and pediatric cases have been run using high-fidelity simulators and a variety of resuscitation topics. The cases are run over a 90-minute period once per month, immediately prior to our departmental meeting, which encourages attendance and participation. The timing of in situ simulation also coincides with our residency program’s academic day, further increasing attendance and participation. The resuscitation/trauma room in the KGH ED is used for these sessions. The program has been well received and was highlighted on the local Global News channel as a public display of our QI initiative.


ED in situ simulation at KGH

The session begins with a pre-brief that includes brief introductions, general objectives, confidentiality, the fidelity contract, and an outline for the session. This is followed by an orientation to the simulator, monitors, and equipment in the room. The scenario then begins with a pre-hospital notification and bedside handover by paramedics, followed by emergency department care ending with a decision on disposition. The scenario is run in real time to maximize realism in terms of the time it takes to draw up and administer medications, etc. This is followed by a debriefing session, led by a staff physician with experience in simulation debriefing, that takes in feedback from all team members as well as observers. CanMEDS themes such as communication, collaboration, leadership, and medical expertise are all discussed.²

Participants and Recruitment

Participants include emergency physicians, residents, nurses, respiratory therapists, pharmacists, paramedics, security, and students from the aforementioned groups. Participants are recruited with an email announcing the session one week prior, sign-up lists on the educators’ door, and posters placed in the ambulance bay and paramedic stations. Cases are determined by the EM Residency Director in conjunction with the Simulation Fellow, ED Nurse Educators, and the Simulation Nurse Educator. The cases are distributed to the discipline leads 2-7 days prior to the session in order to prepare students and newer professionals who may be joining.

Our Results

There were a total of 65 participants when the program began in 2015, with an average of 16 participants per session. This grew to 130 total participants and an average of 19 participants per session in 2016. There was a further increase in 2017 to 213 total participants and a session average of 24 participants, for a total of 408 participants since program inception. The distribution of participant disciplines over the duration of the program is shown below:

Graph 1: ED In Situ Participant Data 2015 – 2017


Feedback has been informal but overwhelmingly positive. The ED nurse educators have found in situ simulation to be one of the most valuable educational experiences for the department and have advocated for the sessions to be paid education time for the nurses. This has increased buy-in and participation. Paramedics have commented that it is time well spent and that they appreciate seeing what happens to the patient after they hand over care. They also remarked that this type of training will go a long way towards better inter-agency cooperation and understanding.

A variety of QI initiatives have been brought forward from these sessions, including better use of existing protocols, finding equipment that is poorly placed or expired, and determining better team-working processes, similar to what was described in our literature review. One specific example was the development of a simulation case around our pediatric diabetic ketoacidosis protocol (then still in draft form), running the case using the protocol, and then providing feedback on revisions, including clarity on initial fluid replacement orders, additions to the initial blood work orders, and improvements to insulin pump delivery. Further QI initiatives that have resulted from this project are summarized below.

Table 2: QI action items and their resulting actions

Team/Communication

  1. Delay in call for help → Reinforce calling for help early
  2. Team members not speaking up when a change in patient condition is noticed → Foster an environment that encourages input from all team members
  3. Medication order confusion between physician, pharmacy, and nursing → Reinforce the importance of closed-loop communication with medication orders
  4. Not all team members hearing the report from paramedics → Reinforce a single paramedic report where everyone stops and listens (unless actively involved in CPR)

Equipment/Resources

  1. Difficulty looking up medication information in the resus bay → Installed an additional computer in the resus bay for better access to information
  2. Unsafe needles for use with agitated patients in the resus bay → Auto-retractable needles made available in the resus bay
  3. Expired Blakemore tubes in the ED → New Blakemore tubes ordered
  4. Unsure of PPE needed during a potential carfentanil exposure case → Communicated the provincial Medical Health Officer recommendations for carfentanil PPE to staff

Knowledge/Task

  1. Lack of knowledge of local use of the DOAC antidote → Reviewed indications/contraindications, ordering information, and administration of Praxbind (idarucizumab)
  2. Uncertain of the local process for initiating ECMO in the ED → Reviewed team placement, patient transfer, and initiation of ECMO in the ED
  3. Conflict over when to intubate a hemodynamically unstable patient → Reinforced resuscitation prior to intubation

PPE – personal protective equipment; DOAC – direct-acting oral anticoagulant; ECMO – extracorporeal membrane oxygenation

Successes, Lessons Learned, and Suggestions

In this article, we set out to describe our experience with ED-based in situ simulation and to outline the evidence for in situ simulation as a QI tool (part 1). We hope this serves as encouragement to those of you who are thinking of starting such a program at your institution. In reflecting on our process, we offer these suggestions and lessons learned:

  1. Engage a team. It takes a team that is committed to the process to get this off the ground. Take the evidence to your team, gain support, and then begin your program.
  2. Start out with your goals/aims/objectives in mind so that you know what it is you’re trying to accomplish.
  3. “Buy-in” is key. Try to structure your program so that it is convenient and so that attendance and participation are encouraged. For us, this meant holding our in situ simulation on academic days for the residency program and immediately preceding our departmental meeting.
  4. Celebrate your successes with everyone involved to build a culture that values in situ simulation and quality improvement.
  5. Bring team members on board who are trained and experienced in simulation as debriefing a multidisciplinary simulation can introduce specific challenges. This is beyond the scope of this article but there are many good resources out there on debriefing including the PEARLS framework.³

We’ll close with the 10 tips that Spurr et al.⁴ described in their excellent article on how to get started with in situ simulation in an ED or critical care setting.

  1. Think about your location and equipment    
  2. Engage departmental leaders to support simulation
  3. Agree on your learning objectives for participants and the department
  4. Be a multiprofessional simulation program
  5. Strive for realism
  6. Start simple, then get complex
  7. Ensure everyone knows the rules and feels safe
  8. Link what you find in simulation to your clinical governance system
  9. The debrief is important: be careful, skillful, and safe
  10. Be mindful of real patient safety and perception

We would love to hear from you. If you have any questions or comments please feel free to comment on this post or to reach us by twitter (@baylis_jared, @KClarkEM, @KelownaEM).

References:

  1. Haughey D. Smart Goals [Internet]. [cited Dec 2017]. https://www.projectsmart.co.uk/smart-goals.php
  2. CanMEDS: Better standards, better physicians, better care [Internet]. [cited Dec 2017]. http://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e
  3. Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simulation in Healthcare. 2015 Apr 1;10(2):106-15.
  4. Spurr J, Gatward J, Joshi N, Carley SD. Top 10 (+ 1) tips to get started with in situ simulation in emergency and critical care departments. Emerg Med J. 2016 Jul 1;33(7):514-6.

In Situ Simulation – Part 1: Quality Improvement Through Simulation

This 2 part series was written by Jared Baylis, JoAnne Slinn, and Kevin Clark. Part 1 is a review of the literature around in situ simulation for quality improvement and part 2 will detail the emergency department in situ simulation program at Kelowna General Hospital including successes, lessons learned, and suggestions for those of you considering starting an in situ simulation program in your centre.

Jared Baylis (@baylis_jared) is a PGY-4 and the chief resident at the Interior Site of UBC’s Emergency Medicine residency program (@KelownaEM). He has an interest in simulation, medical education, and administration/leadership and is currently a simulation fellow through the Centre of Excellence for Simulation Education and Innovation in Vancouver, BC, and an MMEd student through Dundee University.

JoAnne Slinn is a Registered Nurse, with a background in emergency nursing, and the simulation nurse educator at the Pritchard Simulation Centre in Kelowna. She recently completed her Masters of Nursing and has CNA certification in emergency nursing.

Kevin Clark (@KClarkEM) is the Site/Program Director for the UBC Emergency Medicine program in Kelowna. He completed a master’s degree in education with a focus on simulation back in the day when high fidelity simulation was new and sim fellowships weren’t yet a thing.  

In situ simulation is a team-based training technique conducted in actual patient care areas, using equipment and supplies from that area, with members of the care team. (1,2) An increasing number of studies have been published since 2011, the majority since 2015, investigating the benefits of in situ simulation as a quality improvement (QI) modality. (1-20) These studies offer a fascinating glimpse into the potential of in situ simulation. Here is a quote from Spurr et al. that eloquently describes its potential benefits: (19)

“In situ training takes simulation into the workplace. It allows teams to test their effectiveness in a controlled manner, to train for rare events and to interrogate departmental and hospital processes in real time and in real locations. It may also allow teams to uncover latent safety threats in their work environment.”

In this article, we will review recent literature surrounding in situ simulation as a QI tool as a preface to part 2 (next month) where we will describe our process of starting and maintaining an emergency department (ED) based in situ simulation program.

How can in-situ simulation be used for QI?

In the healthcare setting, QI is typically seen as systematic actions that result in measurable positive effects in health care services and/or patient outcomes. (21) There are several ways that in situ simulation can lead to improvement, all of which fall under the umbrella of QI. Previous studies have identified these as improvements in individual provider and/or team performance, identification of latent safety threats (more on this later), and improvement of systems. (11) We will go through several specific examples from the literature, found by performing a librarian-assisted literature search with the search terms “in-situ”, “simulation” OR “simulation based education”, “emergency medicine”, and “quality improvement”. The search yielded 39 records, of which 19 were excluded for lack of relevance, leaving 20 records for review. The main themes of quality improvement using in situ simulation are described below.

Crisis Resource Management

Simply put, crisis resource management (CRM) speaks to the non-technical skills needed for excellent teamwork. (22) These, according to Carne et al., include knowing your environment, anticipating, sharing, and reviewing the plan, ensuring leadership and role clarity, communicating effectively, calling for help early, allocating attention wisely, and distributing the workload. (22)

Wheeler et al. ran standardized simulation scenarios twice per month on their inpatient hospital units. (1) The units were involved on a rotating basis which provided each unit with at least two in situ simulations per year. They noted 134 safety threats and knowledge gaps over the course of the 21-month study. These led to modification of systems but also provided a means to reinforce the use of assertive statements, role clarity, frequent updates regarding the plan, development of a shared mental model, and overcoming of authority gradients between team members.

Miller et al. had a similar CRM idea in mind with their observational study looking at actual trauma team activations during four different phases. (9) Phase one was pre-intervention, phase two was during a didactic-only intervention, phase three was during an in situ simulation intervention, and phase four was a post-intervention phase. They noted that the mean and median Clinical Teamwork Scale ratings for trauma team activations were highest during the in situ phase. Interestingly, though, the scores returned to pre-intervention levels during the post-intervention phase, implying that any sustained improvement in teamwork (CRM) is contingent on ongoing, regular departmental in situ simulation.

Several other studies had a CRM focus in their research involving in situ simulation and all of them either demonstrated improvement in CRM capabilities or identified CRM issues that could be acted on later as a result of in situ simulation. (10-11, 13-14)

Rare Procedures

The most recent example of using in situ simulation for rare procedure assessment comes from a 2017 publication by Petrosoniak et al. (20) In this study, 20 emergency medicine residents were pretested for baseline proficiency at cricothyroidotomy. Following this, they were exposed to a two-part curriculum involving a didactic session followed by a task trainer session. The residents were then tested afterwards by an unannounced in situ simulation involving cricothyroidotomy while on shift in the emergency department. The mean performance time for cricothyroidotomy decreased by 59 seconds (p < 0.0001) after the two-part curriculum and the global rating scales improved significantly as well. This suggests that in situ simulation can be an effective way of assessing proficiency with rare procedures in the emergency department.  


Task trainers such as this chest tube mannequin can be used to teach a procedure before assessing proficiency using in situ simulation

Latent Safety Threats

Latent safety threats can be thought of as “accidents waiting to happen”. (1) There is mounting evidence that multidisciplinary in situ simulation can identify latent safety threats and even reduce patient safety events. Patterson et al. found that after introducing standardized multidisciplinary in situ simulation to their large pediatric emergency department, patient safety events fell from an average of 2-3 per year to a stretch of more than one thousand days without a single event. (8) The same author group noted that their in situ simulation program was able to detect an average of 1 latent safety threat per 1.2 in situ simulations, each consisting of a 10-minute scenario followed by a 10-minute debrief. (10) These latent safety threats were a mix of equipment failures and knowledge gaps regarding roles.

Petrosoniak et al. noted that, with rare procedures, it is not adequate to just teach an individual how to perform the procedure. (11) One must rather run the scenario in an in situ simulation setting to identify potential latent safety threats as well as other systems and teamwork related issues. (11)

An interesting point of view was highlighted by Zimmermann et al., who raised the idea that demonstrated improvements in patient safety through the use of in situ simulation can be used to justify the existence of an in situ program from an administrative standpoint. (14)

Overall, in situ simulation is better at detecting latent safety threats than traditional lab-based simulation, and it can improve patient safety with increased realism and without exposing patients to harm. (16-18)

Systems Issues (e.g. equipment, stocking, labelling)

Systems issues overlap considerably with identification of latent safety threats, teamwork, and CRM. However, one notable study is worth reviewing here. Moreira et al. conducted a prospective, block-randomized, crossover study in a simulated pediatric arrest scenario comparing the use of prefilled, colour-coded (by Broselow category) medication syringes with conventional drug administration. (5) They demonstrated compelling results: time to drug administration was reduced from 47 seconds in the control group to 19 seconds in the colour-coded group. Notably, there were 20 critical dosing errors in the control group compared to 0 in the colour-coded group.

Testing Adherence to Guidelines

Traditionally, adherence to guidelines is measured by chart review or by surveying healthcare practitioners about their practice patterns. Two innovative studies recently considered in situ simulation as a way of assessing adherence to guidelines. Qian et al. ran an observational study at three tertiary care hospitals that see pediatric patients. (4) They introduced a simulation scenario at one of the centres and then compared post-simulation adherence to their sepsis resuscitation checklist, finding that compliance was 61.7% in the hospital that ran simulation compared with 23.1% in the two hospitals that did not (p<0.01). (4) Kessler et al. used standardized in situ simulations to measure and compare adherence to pediatric sepsis guidelines in a series of emergency departments. (7) They did not test simulation as a means of increasing adherence to guidelines but rather used in situ simulation as a tool to determine baseline adherence rates.

Assessing Readiness for Pediatric Patients and Disaster Preparedness

In many centres, acutely ill pediatric patients are, fortunately, a rarity. Abulebda et al. measured the Pediatric Readiness Score (PRS) pre- and post-implementation of an improvement program that included in situ simulations in a multidisciplinary (MD, RN, RT) emergency department setting. (3) They demonstrated an increase in PRS scores from 58.4 to 74.7 (p = 0.009). This suggests that in situ simulation can be used effectively to prepare emergency department care teams to receive patient populations that may not be the norm for any given centre.

This can be extended to disaster preparation as well. Jung et al. described how high influxes of patients to the emergency department during disasters can contribute to increased medical errors and poorer patient outcomes. (6) They found that in situ simulation can improve communication as well as knowledge in disaster situations.

Testing New Facilities Prior to Opening

John Kotter outlined an 8-step process for leading change initiatives in his book Leading Change. (23) Step 5 is to “enable action by removing barriers”. (23) This involves removing barriers like inefficient processes and breaking down hierarchies so that work can occur across silos to generate an impact. (23) Anyone who has worked in a facility that has undergone a major renovation, or even an entirely new build, will have experienced some of the inefficiencies and issues that surface. In situ simulation may provide a medium through which to discover these inefficiencies and to test new facilities before they open for regular use.

Geis et al. completed an observational study that used a series of in situ simulations to test a new satellite hospital and pediatric ED. (16) They had 81 participants (MD, RN, RT, EMS) involved in 24 in situ simulations over 3 months. They identified 37 latent safety threats of which 32 could be rectified prior to the building opening for regular use. These included equipment issues such as insufficient oxygen supply to resuscitate more than one patient at a time, resource concerns such as room layout preventing access by EMS and observation unit beds not fitting through resuscitation room doors, medication issues such as inadequate medication stations, and personnel concerns such as insufficient nursing staff to draw up meds.

Summary & What’s Next?

As you can see there are many useful quality improvement processes that can come directly from a robust ED in situ simulation program. It often takes well defined goals and objectives as well as institutional buy-in to run a successful in situ simulation program. With that in mind, look out for our next post which will detail our emergency department in situ simulation program at the Kelowna General Hospital including aims, structure, participants, results, and lessons learned!

We would love to hear from you. If you have any questions or comments please feel free to comment on this post or to reach us by twitter (@baylis_jared, @KClarkEM, @KelownaEM).

 

References:

  1. Wheeler DS, Geis G, Mack EH, LeMaster T, Patterson MD. High-reliability emergency response teams in the hospital: improving quality and safety using in situ simulation training. BMJ Qual Saf. 2013 Feb 1:bmjqs-2012.
  2. Yajamanyam PK, Sohi D. In situ simulation as a quality improvement initiative. Archives of Disease in Childhood-Education and Practice. 2015 Jun 1;100(3):162-3.
  3. Abulebda K, Lutfi R, Whitfill T, Abu‐Sultaneh S, Leeper KJ, Weinstein E, Auerbach MA. A collaborative in‐situ simulation‐based pediatric readiness improvement program for community emergency departments. Academic Emergency Medicine. 2017 Oct 4.
  4. Qian J, Wang Y, Zhang Y, Zhu X, Rong Q, Wei H. A Survey of the first-hour basic care tasks of severe sepsis and septic shock in pediatric patients and an evaluation of medical simulation on improving the compliance of the tasks. The Journal of emergency medicine. 2016 Feb 29;50(2):239-45.
  5. Moreira ME, Hernandez C, Stevens AD, Jones S, Sande M, Blumen JR, Hopkins E, Bakes K, Haukoos JS. Color-coded prefilled medication syringes decrease time to delivery and dosing error in simulated emergency department pediatric resuscitations. Annals of emergency medicine. 2015 Aug 31;66(2):97-106.
  6. Jung D, Carman M, Aga R, Burnett A. Disaster Preparedness in the Emergency Department Using In Situ Simulation. Advanced emergency nursing journal. 2016 Jan 1;38(1):56-68.
  7. Kessler DO, Walsh B, Whitfill T, Gangadharan S, Gawel M, Brown L, Auerbach M. Disparities in adherence to pediatric sepsis guidelines across a spectrum of emergency departments: a multicenter, cross-sectional observational in situ simulation study. The Journal of emergency medicine. 2016 Mar 31;50(3):403-15.
  8. Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Qual Saf. 2012 Dec 1:bmjqs-2012.
  9. Miller D, Crandall C, Washington C, McLaughlin S. Improving teamwork and communication in trauma care through in situ simulations. Academic Emergency Medicine. 2012 May 1;19(5):608-12.
  10. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf. 2012 Dec 1:bmjqs-2012.
  11. Petrosoniak A, Auerbach M, Wong AH, Hicks CM. In situ simulation in emergency medicine: moving beyond the simulation lab. Emergency Medicine Australasia. 2017 Feb 1;29(1):83-8.
  12. Siegel NA, Kobayashi L, Dunbar-Viveiros JA, Devine J, Al-Rasheed RS, Gardiner FG, Olsson K, Lai S, Jones MS, Dannecker M, Overly FL. In Situ Medical Simulation Investigation of Emergency Department Procedural Sedation With Randomized Trial of Experimental Bedside Clinical Process Guidance Intervention. Simulation in healthcare. 2015 Jun 1;10(3):146-53.
  13. Steinemann S, Berg B, Skinner A, DiTulio A, Anzelon K, Terada K, Oliver C, Ho HC, Speck C. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. Journal of surgical education. 2011 Dec 31;68(6):472-7.
  14. Zimmermann K, Holzinger IB, Ganassi L, Esslinger P, Pilgrim S, Allen M, Burmester M, Stocker M. Inter-professional in-situ simulated team and resuscitation training for patient safety: Description and impact of a programmatic approach. BMC medical education. 2015 Oct 29;15(1):189.
  15. Theilen U, Leonard P, Jones P, Ardill R, Weitz J, Agrawal D, Simpson D. Regular in situ simulation training of paediatric medical emergency team improves hospital response to deteriorating patients. Resuscitation. 2013 Feb 28;84(2):218-22.
  16. Geis GL, Pio B, Pendergrass TL, Moyer MR, Patterson MD. Simulation to assess the safety of new healthcare teams and new facilities. Simulation in Healthcare. 2011 Jun 1;6(3):125-33.
  17. Fan M, Petrosoniak A, Pinkney S, Hicks C, White K, Almeida AP, Campbell D, McGowan M, Gray A, Trbovich P. Study protocol for a framework analysis using video review to identify latent safety threats: trauma resuscitation using in situ simulation team training (TRUST). BMJ open. 2016 Nov 1;6(11):e013683.
  18. Ullman E, Kennedy M, Di Delupis FD, Pisanelli P, Burbui AG, Cussen M, Galli L, Pini R, Gensini GF. The Tuscan Mobile Simulation Program: a description of a program for the delivery of in situ simulation training. Internal and emergency medicine. 2016 Sep 1;11(6):837-41.
  19. Spurr J, Gatward J, Joshi N, Carley SD. Top 10 (+ 1) tips to get started with in situ simulation in emergency and critical care departments. Emerg Med J. 2016 Jul 1;33(7):514-6.
  20. Petrosoniak A, Ryzynski A, Lebovic G, Woolfrey K. Cricothyroidotomy In Situ Simulation Curriculum (CRIC Study): Training Residents for Rare Procedures. Simulation in Healthcare. 2017 Apr 1;12(2):76-82.
  21. US Department of Health and Human Service. Quality improvement [Internet]. 2011 April [cited Dec 2017]. https://www.hrsa.gov/sites/default/files/quality/toolbox/508pdfs/qualityimprovement.pdf
  22. Carne B, Kennedy M, Gray T. Crisis resource management in emergency medicine. Emergency Medicine Australasia. 2012 Feb 1;24(1):7-13.
  23. Kotter JP. Leading change. Harvard Business Press; 1996.

Validity – Starting with the Basics

This critique on validity and how it relates to simulation teaching was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

When designing simulation exercises that will ultimately lead to the assessment and evaluation of a learner’s competency for a given skill, the validity of the simulation as a teaching tool should be addressed on a variety of levels. This is especially relevant when creating simulation exercises for competencies outside of the medical expert realm such as communication, team training and problem solving.

As a budding resuscitationist and simulationist, understanding validity is vital to ensuring that the simulation exercises that I create are actually measuring what they intend to measure, that is, they are valid (Devitt et al.).  As we look ahead to Competency Based Medical Education (CBME), it will become increasingly important to develop simulation exercises that are not only interesting and high-yield with respect to training residents in critical skills, but also have high validity with respect to reproducibility as well as translation of skills into real world resuscitation and patient care.

In order to better illustrate the various types of validity and how they can affect simulation design, I will present an example of an exercise I implemented when I was tasked with teaching a 5-year-old to tie her shoelaces. To do so, I taught her using a model, very similar to this one I found on Pinterest:

We first learned the rhyme, then used this template to practice over and over again. The idea behind using the model was to provide the reference of the poem right next to the shoes, but also to enlarge the scale of the shoes and laces, since her tiny feet meant tiny laces on shoes that were difficult for her to manipulate. It also meant we could do this exercise at the table, which allowed us to be comfortable as we learned. At the end of the exercise, I gave her a “test” and asked her to tie the cardboard shoes to see if she remembered what we learned. While there was no rigorous evaluation scheme, the standard was that she should be able to tie the knot to completion (competency), leading to two loops at the end.

I applied my simulation learning to this experience to assess the validity of this exercise in improving her ability to tie her laces. The test involved her tying these laces by herself without prompting.

Face validity: Does this exercise appear to test the skills we want it to?

Very similar to “at face value,” face validity is how much a test or exercise looks like it is going to measure what it intends to measure. This can be assessed from an “outsider” perspective: for example, asking her mom whether she feels this test could measure her child’s ability to tie a shoe. Whether the test actually works is not the concern of face validity; rather, it is whether it looks like it will work (Andale). Her mom thought this exercise would be useful in learning how to tie shoes, so face validity was achieved.

Content validity: Does the content of this test or exercise reflect the knowledge the learner needs to display? 

Content validity is the extent to which the content in the simulation exercise is relevant to what you are trying to evaluate (Hall, Pickett and Dagnone). Content validity requires an understanding of the content required to either learn a skill or perform a task. In Emergency Medicine, content validity is easily understood when considering a simulation exercise designed to teach learners to treat a VFib arrest: the content is established by the ACLS guidelines, and best practices have been clearly laid out. For more nebulous skill sets (communication, complex resuscitations, rare but critical skills like bougie-assisted cricothyroidotomies, problem solving, team training), the content is not as well defined and may require surveys of experts, panels, and reviews by independent groups (Hall, Pickett and Dagnone). For my shoelace-tying learner, the content was defined as a single way to tie her shoelaces; however, it did not include the initial lacing of the shoes or how to tell which shoe is right or left, and, most importantly, the final test did not include these components. Had I tested her on lacing or on appropriately choosing right or left, I would not have had content or face validity. This speaks to choosing appropriate objectives for a simulation exercise: objectives are the foundation upon which learners develop a scaffolding for their learning. If instructors are going to use simulation to evaluate learners, the objectives need to clearly drive the content, and in turn the evaluation.

Construct Validity: Is the test structured in a way that actually measures what it claims to?

In short, construct validity is assessing if you are measuring what you intend to measure.

My hypothesis for our exercise was that any measurable improvement in her ability to tie her shoelaces would be attributable to the exercise, and that with this exercise she would improve her ability to complete the steps required to tie her shoelaces. At the beginning of the shoelace-tying exercise, she could pick up the laces, one in each hand, and then looked at me mostly blankly for the next steps. At the end of the exercise, and for the final “test,” she was able to hold the laces and complete the teepee so it’s “closed tight” without any prompting. The fact that she improved is evidence to support the construct; however, construct validity is an iterative process and requires different forms of evaluation to prove the construct. To verify construct validity, other tests with similar qualities can be used. For this shoelace-tying exercise, we might say that shoelace tying is a product of fine motor dexterity, and fine motor dexterity theory would predict that as her ability to perform other dexterity-based exercises (tying a bow, threading beads onto a string) improves, so would her performance on the test. To validate our construct, we could then perform the exercise over time and see if her performance improves as her motor skills develop, or compare her performance on the test to that of an older child or adult, who would have better motor skills and would be expected to perform better.
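To make the “other tests with similar qualities” idea concrete, here is a minimal sketch, with invented scores rather than real data, of one strand of construct-validity evidence: correlating performance on the shoelace-tying test with performance on another fine-motor task (bead threading). A strong positive correlation would support, but not prove, the fine-motor-dexterity construct; scipy is assumed to be available.

```python
# Minimal sketch: convergent evidence for a construct via correlation.
# Both score lists are invented for illustration; in practice they would come from
# the same learners assessed on both tasks.
from scipy.stats import spearmanr

# Shoelace-tying test scores (e.g., steps completed without prompting)
shoelace_scores = [1, 2, 2, 3, 4, 5, 5, 6, 7, 7]
# Scores on a related fine-motor task (e.g., beads threaded in one minute)
bead_scores = [3, 4, 6, 5, 8, 9, 9, 11, 12, 13]

rho, p = spearmanr(shoelace_scores, bead_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A high rho is one piece of evidence for the construct; validation remains iterative.
```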

External validity: Can the results of this exercise or study be generalized to other populations or settings, and if so, which ones?

With this shoelace tying exercise, should the results be tested and a causal relationship be established between this exercise and ability to tie shoes, then the next step would be to see if the results can be generalized to other learners in different environments. This would require further study and a careful selection of participant groups and participants to reduce bias. This would also be an opportunity to vary the context of the exercise, level of difficulty, and to introduce variables to see if the cardboard model could be extrapolated to actual shoe tying.

Internal validity: Is there another cause that could explain my observation?

With this exercise, her ability to tie laces improved over the course of the day. In order to assess internal validity, it is important to ask whether any improvement or change in behaviour could be attributed to another, external factor (Shuttleworth). For this exercise, there was only one instructor and one student in a consistent environment. If we had reproduced this exercise using a few novice shoelace-tiers and a few different instructors, it may have added confounders to the experiment, making it less clear whether improvements in shoelace tying were attributable to the exercise or to the instructors. Selection bias can also affect internal validity: for example, selecting participants who were older (and therefore had more motor dexterity to begin with) or who had previous shoelace-tying training would likely affect the outcome. For simulation exercises, internal validity can be confounded by multiple instructors, differences in the mannequin or simulation lab, and different instructor styles, which may lead to differences in learning. Overcoming these challenges to internal validity is partly achieved by robust design, but also by repeating the exercise to ensure that the outcomes are reproducible across a wider variety of participants than the sample cohort.

There are many types of validity, and robust research projects require an understanding of validity to guide the initial design of a study or exercise. Through this exercise in validity I was able to better take the somewhat abstract concepts of face validity and internal validity and ground them into practice through a relatively simple exercise. I have found that doing this has helped me form a foundation in validity theory, which I can now expand into evaluating the simulation exercises that I create.

 

REFERENCES

1) Andale. “Face Validity: Definition and Examples.” Statistics How To. Statistics How to 2015. Web. October 20 2017.

2) Devitt, J. H., et al. “The Validity of Performance Assessments Using Simulation.” Anesthesiology 95.1 (2001): 36-42. Print.

3) Hall, A. K., W. Pickett, and J. D. Dagnone. “Development and Evaluation of a Simulation-Based Resuscitation Scenario Assessment Tool for Emergency Medicine Residents.” CJEM 14.3 (2012): 139-46. Print.

4) Shuttleworth, M. (Jul 5, 2009). Internal Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/internal-validity

5) Shuttleworth, M. (Aug 7, 2009). External Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/external-validity

Simulation-Based Assessment

This critique on simulation-based assessment was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

You like to run simulations.  You have become adept at creating innovative and insightful simulations. You have honed your skills in leading a constructive debrief.  So what’s next? You now hope to be able to measure the impact of your simulation.  How do you design a study to measure the effectiveness of your simulation on medical trainee education?

There are numerous decisions to make when designing a sim-based assessment study. For example, who is included in the study?  Do you use direct observation or videotape recording or both? Who will evaluate the trainees? How do you train your raters to achieve acceptable inter-rater reliability? What are you assessing – team-based performance or individual performance?

One key decision is the evaluation tool used for assessing participants.  A tool ideally should:

  • Have high inter-rater reliability
  • Have high construct validity
  • Be feasible to administer
  • Be able to discriminate between different levels of trainees

Two commonly used sim-based assessment tools are Global Rating Scales (GRS) and Checklists. Here, these two tools will be compared to evaluate their roles in simulation-based assessment in medical education.

Global Rating Scales vs Checklists

GRS are tools that allow raters to judge participants’ overall performance and/or provide an overall impression of performance on specific sub-tasks.1 Checklists are lists of specific actions or items that are to be performed by the learner. Checklists prompt raters to attest to directly observable actions.1

Many GRS ask raters to use a summary item to rate overall ability or to rate a “global impression” of learners. This summary item can be a scale from fail to excellent, as in Figure 1.2 Another GRS may assess learners’ ability to perform a task independently by having raters mark learners on a scale from “not competent” to “performs independently”. In studies, the overall GRS has been shown to be more sensitive at discriminating between levels of learner experience than checklists.3,4,5 Other research has shown that GRS demonstrate superior inter-item and inter-station reliability and validity compared to checklists.1,6,7,8 GRS can be used across multiple tasks and may better measure expertise levels in learners.1

Some of the pitfalls of GRS are that they can be quite subjective and that they rely on “expert” opinion in order to grade learners effectively and reliably.


Figure 1: Assessment tool used by Hall et al. in their study evaluating a simulation-based assessment tool for emergency medicine residents, using both a checklist and a global assessment rating.2

Checklists, on the other hand, are thought to be less subjective, though some studies argue this is false, as the language used in the checklist can itself be subjective.10 If designed well, however, checklists provide clear step-by-step outlines for raters to mark observable behaviours. A well-designed checklist is easy to administer, so any teacher can use it (without relying on experts to administer the tool). By measuring defined and specific behaviours, checklists may help to guide feedback to learners.

However, some pitfalls of checklists are that high scores have not been shown to rule out “incompetence” and therefore may not accurately evaluate skill level.9,10 Checklists may also cover multiple areas of competence, which may contribute to lower inter-item reliability.1 Other studies have found that, despite checklists being theoretically easy to use, inter-rater reliability was consistently low.9 However, a systematic review of the literature found that checklists performed similarly to GRS in terms of inter-rater reliability.1
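Because inter-rater reliability comes up repeatedly in this comparison, a minimal sketch of how it could be quantified for a single binary checklist item is shown below. It computes Cohen’s kappa by hand for two hypothetical raters scoring the same performances; the ratings are invented, and for ordinal GRS scores a weighted kappa or an intraclass correlation would usually be preferred.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same performances on one
# binary checklist item (1 = Observed, 0 = Not Observed). Ratings are invented.
def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal probability of scoring 1 (or 0)
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - chance) / (1 - chance)

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")
# About 0.6 for these ratings; values of 0.61-0.80 are often labelled "substantial" agreement.
```

Agreement statistics like this, computed during rater training or pilot testing, help decide whether a checklist, a GRS, or a combination is reliable enough for the intended assessment.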

 

TABLE 1: Pros and Cons of Global Rating Scales and Checklists

Global Rating Scales

  Pros:
  • Higher internal reliability
  • More sensitive in defining level of training
  • Higher inter-station reliability and generalizability

  Cons:
  • Less precise
  • Subjective rater judgement and decision making
  • May require experts or more rater training in order to rate learners

Checklists

  Pros:
  • Good for the measurement of defined steps or specific components of performance
  • Possibly more objective
  • Easy to administer
  • Easy to identify defined actions for learner feedback

  Cons:
  • Possibly lower reliability
  • Requires dichotomous ratings, possibly resulting in loss of information

Conclusion

With the move towards competency-based education, simulation will play an important role in evaluating learners’ competencies. Simulation-based assessment allows for direct evaluation of an individual’s knowledge, technical skills, clinical reasoning, and teamwork. Assessment tools are an important component of medical education.

An optimal assessment tool for evaluating simulation would be reliable, valid, comprehensive, and able to discriminate between learners’ abilities. Global Rating Scales and Checklists each have their own advantages and pitfalls, and each may be used for the assessment of specific outcome measures. Studies suggest that GRS have some important advantages over checklists, yet the evidence for checklists appears stronger than previously thought. Whichever tool is chosen, it is critical to design and test the tool to ensure that it appropriately assesses the desired outcome. If feasible, using both a Checklist and a Global Rating Scale would help to optimize the effectiveness of sim-based education.

 

REFERENCES

1           Ilgen JS et al. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015 Feb;49(2):161-73

2           Hall AK. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. CJEM. 2012 May;14(3):139-46

3           Hodges B et al.  Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–6

4           Morgan PJ et al. A comparison of global ratings and checklist scores from an undergraduate assessment using an anesthesia simulator. Acad Med. 2001;76(10) 1053-5

5           Tedesco MM et al. Simulation-based endovascular skills assessment: the future of credentialing? J Vasc Surg. 2008 May;47(5):1008-11

6           Hodges B et al. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129–1134

7           Hodges B and McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–1016

8           Regehr G et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993-7

9           Walsak A et al. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med. 2015 Aug;90(8):1100-8

10          Ma IW et al. Comparing the use of global rating scale with checklists for the assessment of central venous catheterization skills using simulation. Adv Health Sci Educ Theory Pract. 2012;17:457–470

 

Simulation Design

This critique on simulation design was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

Have you ever designed a simulation case for learners? If so, did you base your sim on a “cool case” that you saw? I think we have all been guilty of this; I know I have. Obviously a unique, interesting case should make for a good sim, right? And learning objectives can be created after the case is written, right?

Recently, during my Simulation, Health Sciences and Resuscitation in the ED fellowship (SHRED), I have come to discover some theory and methods behind the madness of creating sim cases. And I have pleasantly discovered that rather than making things more complicated, having an approach to sim creation can not only help to guide meaningful educational goals but also makes life a whole lot easier!

I find it helpful to think of sim development in the PRE-sim, DURING-sim, and POST-sim phases.

In a systematic review of simulation-based education, Issenberg et al. describe 10 aspects of simulation interventions that lead to effective learning, which I will incorporate into the different phases of sim design.1

PRE-sim

Like many things, the bulk of the work and planning is required in the PRE phase.

When deciding whether or not to use sim as a learning tool, the first step should be to ask which modality is most appropriate based on the stated learning objectives.1 A one-size-fits-all approach is not optimal for learning. This is stated well in a paper by Lioce et al. about simulation design: the “modality is the platform of the experience”.2 For me, one of the most important considerations is the following: can the learning objectives be appropriately attained through simulation, and if so, what type of simulation? For example, if the goal is to learn about advanced airway adjuncts, this may be best suited to repetitive training on an airway mannequin or a focused task trainer. If the goal is to work through a difficult airway algorithm, perhaps learners should progress through cases requiring increasingly difficult airway management using immersive, full-scale simulation. To explore systems-based processes, you can try in situ interprofessional team training. Basically, a needs assessment is key. The paper by Lioce et al. describes guidelines for working through a needs assessment.2

Next, simulation should be integrated into an overall curriculum to provide the opportunity to engage in repetitive (deliberate) practice.1 Simulation in isolation may not produce effective, sustainable results.3 An overall curriculum, while time-consuming to develop and implement, is a worthy undertaking. Having one simulation build upon others may improve learning through spaced repetition and varying context, delivery, and level of difficulty.

This can be difficult to achieve given constrained time, space and financial resources. Rather than repeat the same cases multiple times, Adler et al created cases with overlapping themes; the content and learning objectives differed between the cases, but the outcome measures were similar.3 This strategy could be employed in curriculum design to enhance repeated exposure while limiting the total number of sessions required.

Effective programmatic design should facilitate individualized learning and provide clinical variation:1 Lioce et al refer to a needs assessment as the foundation for any well-designed simulation.2 Historically, simulation has addressed certain competencies residents are expected to master (airway, toxicology, trauma, pediatrics, etc.) without seeking input a priori on the learning needs of the residents. It may be valuable to survey participants and design simulations based on perceived curriculum gaps or learning objectives, or to assess baseline knowledge with structured assessment techniques before designing cases and curricula. (NB: Such a project is currently underway, led by simulation investigators at Sunnybrook Hospital in Toronto.)

Learners should have the opportunity to practice with increasing levels of difficulty:1 It is logical that learners at different stages of their training require different gradations of difficulty. Dr. Martin Kuuskne breaks down the development of simulation cases into their basic elements and advocates for thinking of each sim objective in terms of both knowledge and cognitive process.4

The knowledge components can be divided into medical knowledge and crisis resource management (CRM), or, more preferably, non-technical skills.5 Medical knowledge objectives are self-explanatory and should be based on the level of the trainee. Non-technical skills objectives typically relate to team-based communication, leadership, resource utilization, situational awareness and problem solving.6 Kuuskne’s post makes the very salient point that we need to limit the number of objectives in both of these domains, as too many can quickly overwhelm learners and decrease knowledge absorption.

The cognitive process objectives can also be developed with increasing complexity, depending on the level of the trainee.4 For example, at the lowest level of learning is “remembering” (describing, naming, repeating, etc.), while at the highest level is “creating” (formulating, integrating, modifying, etc.). A case could be made for involving senior learners in creating and implementing their own sim cases.

DURING-sim

As part of creating scripts and cases, case designers should try to anticipate learner actions and pitfalls. There will always be surprises and unexpected actions (a good reason to trial, beta test and revise before deploying). On EMSimCases.com, Kuuskne outlines his approach to creating the case progression and how it can be standardized.6 The patient in the simulation has a set of defined states: the condition of the patient as created by the vital signs and clinical status.6 Progression between states is driven by modifiers and triggers: modifiers are learner actions that change the patient within the current state (for example, oxygen saturation improving once oxygen is applied), whereas triggers are actions, or omissions, that move the patient to a new state. I found this terminology helpful when outlining case progression; a minimal sketch of the idea follows below.
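For readers who like to see the idea concretely, the states/modifiers/triggers model maps naturally onto a tiny state machine. The sketch below is purely illustrative; the case content, state names and helper names are invented for this example (written in Python) and are not taken from Kuuskne’s posts:

    from dataclasses import dataclass, field

    @dataclass
    class State:
        """A patient state: a named set of vital signs and clinical findings."""
        name: str
        vitals: dict

    @dataclass
    class SimCase:
        states: dict            # state name -> State
        current: str            # name of the currently active state
        log: list = field(default_factory=list)

        def apply_modifier(self, action: str, vital_changes: dict) -> None:
            # A modifier changes the patient WITHIN the current state,
            # e.g. saturation improves after oxygen is applied.
            self.states[self.current].vitals.update(vital_changes)
            self.log.append(f"MODIFIER: {action} -> {vital_changes}")

        def apply_trigger(self, action: str, next_state: str) -> None:
            # A trigger moves the patient to a NEW state,
            # e.g. no cardioversion within 3 minutes -> cardiac arrest.
            self.current = next_state
            self.log.append(f"TRIGGER: {action} -> {next_state}")

    # Hypothetical chest pain case with two states
    case = SimCase(
        states={
            "unstable_tachycardia": State("unstable_tachycardia",
                                          {"HR": 180, "BP": "82/50", "SpO2": 91}),
            "cardiac_arrest": State("cardiac_arrest",
                                    {"HR": 0, "BP": "0/0", "SpO2": None}),
        },
        current="unstable_tachycardia",
    )

    case.apply_modifier("Oxygen applied", {"SpO2": 96})
    case.apply_trigger("No cardioversion within 3 minutes", "cardiac_arrest")
    print(case.current)   # cardiac_arrest
    print(case.log)

The code itself is beside the point; the value of the exercise is that sorting every anticipated learner action into either “modifier” or “trigger” makes the case progression explicit before the scenario is ever run.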

Simulation allows for standardization of learning in a controlled environment:1 The truth of residency training is that, even within the same program, residents will all have uniquely different experiences. One resident ahead of me had, by graduation, taken part in 10 resuscitative thoracotomies; many residents in the same class had not seen any. We cannot predict what walks through our doors, but we can try to give residents the same baseline skills and knowledge to deal with whatever does.

POST-sim

Feedback is provided during the learning experience:1 unless it is an exam-type setting, in which case it should be given afterwards. It is important, again, to limit the number of learning objectives so that there is room for both scripted and unscripted topics of conversation. Debriefing the case should then be a breeze, as it flows directly from the case objectives created at the beginning.

Going further than “the debrief” is the question of how we evaluate the value of the sim itself. To me, this is the most difficult step and the one most rarely done. Evaluation of each sim case should be sought from participants and stakeholders, in addition to the pilot testing, and that information needs to be fed forward to make meaningful improvements in case design and implementation.

Outcomes or benchmarks should be clearly defined and measured. In their randomized study, Adler et al created clearly defined critical rating checklists during the development and needs assessment of their sim cases.3 They then tested each case twice on residents to get feedback; a toy example of what such a checklist might look like follows below.
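To make “clearly defined and measured” more concrete, a critical-action checklist can be reduced to a short list of binary, observable items and a simple completion score. This sketch is illustrative only; the case, checklist items and scoring below are invented for the example and are not taken from Adler et al:

    # Hypothetical critical-action checklist for a simulated anaphylaxis case.
    # Each item is a binary, observable behaviour: done (True) or not done (False).
    CRITICAL_ACTIONS = [
        "Recognized anaphylaxis within 2 minutes",
        "Administered IM epinephrine",
        "Gave IV fluid bolus",
        "Called for help / escalated care",
    ]

    def score_checklist(observed: dict) -> float:
        """Return the fraction of critical actions completed, as rated live or on video."""
        done = sum(bool(observed.get(item, False)) for item in CRITICAL_ACTIONS)
        return done / len(CRITICAL_ACTIONS)

    # Example rating for one resident during pilot testing of the case
    rating = {
        "Recognized anaphylaxis within 2 minutes": True,
        "Administered IM epinephrine": True,
        "Gave IV fluid bolus": False,
        "Called for help / escalated care": True,
    }
    print(f"Critical actions completed: {score_checklist(rating):.0%}")  # 75%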

In summary, although a “cool case” is always interesting, it doesn’t always make the best substrate for teaching and learning in the simulator.  Thoughtful case creation for simulation needs to go beyond that, breaking down the design process into basic, known components and using a structured theory-based approach in order to achieve meaningful educational outcomes.

REFERENCES:

1               Issenberg et al. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10-28.

2               Lioce et al. Standards of Best Practice: Simulation Standard IX: Simulation Design. Clinical Simulation in Nursing. 2015;11:309-315.

3               Adler et al. Development and Evaluation of a Simulation-Based Pediatric Emergency Medicine Curriculum. Academic Medicine. 2009;84:935-941.

4               Kuuskne M. How to develop targeted simulation learning objectives – Part 1: The Theory. April 21, 2015 https://emsimcases.com/2015/04/21/how-to-develop-targeted-simulation-learning-objectives-part-1-the-theory/

5               Kuuskne M. How to develop targeted simulation learning objectives – Part 2: The Practice. June 15, 2015. https://emsimcases.com/2015/06/16/how-to-develop-targeted-simulation-learning-objectives-part-2-the-practice/

6               Kuuskne M. Case Progression: states, modifiers and triggers. May 19, 2015. https://emsimcases.com/2015/05/19/case-progression-states-modifiers-and-triggers/

 

 

 

Cashing out by buying in – How expensive does a mannequin have to be to call a simulation “high fidelity?”

This critique on simulation fidelity was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

How expensive does a mannequin have to be to call a simulation “high fidelity?”


That was the question I was pondering this week, as our SHRED theme this month is simulation in medical education. In my 4th year of residency at University of Toronto, most of my simulation training has been in one of our two simulation labs, using one of our three “high fidelity” mannequins. However, even though the simulation labs and equipment have been very consistent over the past few years, I have found a fluctuating attentiveness and “buy-in” to these simulation sessions: some have felt very real and have resulted in a measurable level of stress and urgency to improve the patient’s (read: mannequin’s) outcome while others have felt like a mandatory hoop through which to jump in order to pass a rotation.

It should come as no surprise that in Emergency Medicine, simulation is a necessary part of our development as residents. Simulation-based medical education allows trainees to meet standards of care and training, mitigates risks to patients, develops clinical competencies, improves patient safety, aids in managing complex patient encounters, and protects patients [1]. Furthermore, in emergency medicine, simulation has allowed me to practice rare, life-saving critical skills like cricothyroidotomies and thoracotomies before employing them in real-time resuscitations. Those who know me will tell you that when it comes to simulation I fully support its use as an educational tool, but there does still seem to be an ebb and flow to how much I commit to each sim case I participate in as a learner.

During a CCU rotation, I was involved in a relatively simple “chest pain” simulation exercise. As the circulating resident, I was tasked with giving the patient ASA to chew. In that moment I didn’t just simulate giving ASA; I took the yellow lid from an epinephrine kit (it looked like a small circular tablet) and put it in the mannequin’s mouth, asking him to chew it. I did not think much of it until our airway resident was preparing to intubate, and the whole case derailed into an “airway foreign body” scenario—to the confusion of the simulationists sitting behind the window who didn’t know how that foreign body got into the airway in the first place. Why did I do that? I believe it’s because I bought into the scenario, and in my eyes that mannequin was my patient, and my patient needed the ASA to chew. The case of a chest pain—although derailed into a difficult airway case by my earnest delivery of medications—was in the context of a residency rotation where I was expected to manage the CCU independently overnight. That context allowed me to buy into the case, because I knew these skills were transferrable to my role as a CCU resident. My buy-in has had little to do with the mannequin and the physical space and everything to do with how the simulation fit into the greater context of my current training.

There has been discussion amongst simulationists that there should be a frame shift away from fidelity and towards educational effectiveness: engaging learners, providing a framework and context to help them suspend their disbelief, and providing structure to apply the simulation to real-time resuscitations [2]. The notion of functional fidelity is one that resonates with me as a budding simulationist; if a learner has an educational goal and is oriented to how the simulation will provide the context and platform to reach that goal, the learner may more easily “project fidelity onto the simulation scenario.” That is, the learner will buy into the simulation [2].

 So how do we facilitate buy-in?

We can start by orienting learners meaningfully and intentionally to the simulation exercises [3]. This can be accomplished by demonstrating how the concepts from the simulation are transferrable to other contexts, which allows learners to engage on a deeper level with the simulation and see the greater applicability of what they are learning [2]. We can’t assume learners understand why or how an exercise is applicable to them. A chest pain case for a senior resident in emergency medicine has very different learning outcomes than the same case for an off-service junior resident rotating through the ER; the same can be said for a resident working primarily in the hospital versus in an outpatient clinic. Tailoring case objectives to specific learners offers an opportunity to teach relevant skills in the context of their training, giving them a reason to buy in to the scenario. Moving beyond “to learn…” or “to outline the management of…”, I would advocate for outlining objectives specific to the level and specialty of the participating learners, so they can see how the skills they gain in the simulation will be employed in their own practice.

We can also use the specific objectives and context we start the simulation session with to foster a more directed debrief. The post-simulation discussion should cover not only medical management principles but also what learners would do if they encountered a similar situation in their own work environment (clinic, ward, etc.), transferring the learning out of the simulation lab and into real-world medical practice.

If we are going to see simulation as a tool, let’s see it as one of those fancy screwdrivers with multiple bits, and stop trying to use the screwdriver handle as a hammer for every nail. No one mannequin, regardless of how expensive it is or how many fancy features it has, can replace the role of a thoughtful facilitator who helps learners buy into the simulation. If facilitators take the time to orient the learner to their specific learning objectives and then reinforce that context in the debrief discussion, they can increase the functional fidelity of the session and help learners maximize the benefit of each simulation experience.

 

Citations 

  1. Ziv, A., Wolpe, P. R., Small, S. D., & Glick, S. (2003). Simulation-Based Medical Education. Academic Medicine, 78(8), 783-788. doi:10.1097/00001888-200308000-00006
  2. Hamstra, S. J., Brydges, R., Hatala, R., Zendejas, B., & Cook, D. A. (2014). Reconsidering Fidelity in Simulation-Based Training. Academic Medicine, 89(3), 387-392. doi:10.1097/acm.0000000000000130
  3. Issenberg, S. B., Mcgaghie, W. C., Petrusa, E. R., Gordon, D. L., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher, 27(1), 10-28. doi:10.1080/01421590500046924

 

Moulage Tips and Tricks

This week’s post is written by Dr. Cheryl ffrench. She is the Director of Simulation for the Department of Emergency Medicine at the University of Manitoba and is also one of the advisory board members for EMSimCases.

[Photo: facial trauma moulage]

Emergency Medicine is Sensory

Emergency Medicine is a very sensory specialty. Walking into the resuscitation room, the appearance, sound and sometimes smell of the patient provide a wealth of information before introductions are even made. Recognition of the “sick” patient is something we strive to teach our residents and medical students, and simulation is an excellent tool for teaching core emergency medicine skills. The principles of crisis resource management are essential to the practice of emergency medicine, and simulation provides us with an excellent tool to bring them to light. However, when the simulation stem describes an 80-year-old female patient in respiratory failure and the learners walk into a room to find a manikin that more closely resembles a 25-year-old Arnold Schwarzenegger, despite being asked to suspend their disbelief, their approach to the patient can’t help but be different. Similarly, when asked to assess the trauma patient, the visual cues of finding the stab wound or open fracture help to reinforce both their clinical skills and the simulation experience. Most manikins used in simulation today are large, robust, healthy males in the prime of their simulated lives. This does not reflect the patient population of most emergency practices.

Simple Fixes to Improve Realism

[Photo: elderly man]

Simple measures, like remembering to exchange the external genitalia and adding a simple wig, can turn a “he” into a “she”. Suddenly the patient has a more feminine appearance that reflects the other 50% of our patient population. A grey wig can age him (or her) to an extent, but investing in some costume masks found at any party store can take the manikin’s healthy 25-year-old skin and give the illusion of a face wrinkled by time. These masks fit most standard adult-size manikins. The softer and more form-fitting the mask, the nicer it is for the learner to work with when intubating, but even the less pliable masks have little impact on airway management so long as the mask has an open mouth.

Moulage in Trauma

[Photo: moulaged patient on a spine board]

The wounds and injuries that our trauma patients present with often dictate our index of suspicion for the severity of their illness, and thus our level of concern. Seeing the bleeding wound in the centre of the chest or over the anterior neck raises the level of anxiety and serves as a constant reminder of the seriousness of the trauma. That is difficult to create if the learners are simply told by the confederate that there is a “big stab wound” or an “expanding hematoma”, as these findings can easily be lost or forgotten without the visual reminder in the midst of a chaotic simulation case. Stab wounds can be easily added with some basic Halloween “wounds” found at any Halloween or party store. For the more creative, they can be made even more realistic through some simple techniques described at the end of this blog.

More Simple Adjuncts

The placement of a pregnant abdomen on the trauma patient provides another prompt for the unique management principles of that patient population. Place a fetal manikin in the belly and suddenly you have a perimortem C-section case that will never be forgotten. Bubble wrap underneath the skin of the manikin’s neck creates the tactile feel of subcutaneous emphysema, which, if also moulaged with bruising on the skin, provides your learners with the kind of frightening airway scenario that keeps most emergency practitioners up at night. Moulage combined with either preset vocals or some voice-over acting will help to create a unique emergency medicine simulation experience that your learners won’t soon forget.

Mannequin Maintenance

When applying makeup to mannequin skins, it is important to first prepare the area so that the makeup does not stain the skin. Here are some helpful tricks, courtesy of Jane Fedoruk, a Simulation Technician at the University of Manitoba:

  1. Wipe the area for application with a thin layer of Vaseline or baby oil.
  2. Lightly wipe again with a dry cloth to remove excess oil.
  3. Apply makeup lightly and avoid rubbing it into the pores of the skin.
  4. Avoid putting the makeup on until as late as possible. Leaving the colours on for extended periods of time increases the probability of a stain.
  5. As much as possible, use only products provided or sanctioned by the mannequin company.
  6. Be particularly careful when using red- or blue-based makeup, as these stain the most.
  7. Remove the moulage as soon as possible after use, and clean the area with mannequin cleaner to remove the oils.

All photos courtesy of Cheryl ffrench and Jane Fedoruk.

Debriefing Techniques – the Art of Guided Reflection

Simulation without debriefing is really just an expensive way of either making learners feel bad about themselves or allowing them to practice performing poorly. This is why the theory behind debriefing is so important.

Debriefing is one of the most amazing teaching tools available to an instructor. Debriefing allows insight into a learner’s thought process such that an instructor can tailor teaching to a learner’s specific needs. Kolb’s learning cycle1 and Schön’s description of the Reflective Practitioner2 allow us to see why debriefing is such a useful tool. We must actively reflect on an experience to learn from it; debriefing allows educators to help guide that reflection.

PEARLS Framework

While debriefing is arguably the most important component of simulation education, it is also a difficult skill to acquire. Eppich and Cheng3 have published an excellent approach to debriefing that reviews many of the key steps a novice simulation educator should aim to follow. They have called it the PEARLS approach (Promoting Excellence and Reflective Learning in Simulation). We will review its four phases here.

1. Reactions Phase

This is where learners are invited to express their raw feelings about the case. Often, learners will do this without a formal invitation (for example, you may hear initial reactions while walking from the simulator to the debriefing room). It is important to invite all learners to have a chance to vent during this stage.

2. Description Phase

This phase begins by asking a learner to describe what they think the case was about. This allows the educator and the learners to see if they are on the same page. Often, this leads to important issues for discussion during the next phase.


3. Analysis Phase

Here, the educator must tailor their style of debriefing to suit both the learners in the room and the time available for the debriefing. This phase is what educators often think about when they envision debriefing. Essentially, the analysis phase is where learners can go through guided reflection.

+/Δ Method

There are two common styles of guided reflection described. The first is the +/Δ method. This involves probing learners as to what went well (the +) and what could be improved or changed for the future (the Δ). Many who are new to debriefing find themselves turning to this style at first.

Advocacy/Inquiry Method

A second, commonly used style is called advocacy/inquiry.4 This approach can lead to remarkable insights into the knowledge and performance of the learners, though it is somewhat more challenging to execute well. The basic premise is that the debriefer first describes a noted performance gap and then asks about the learner’s frame of mind at the time of the performance. The learner’s answer guides the instructor toward the learning points that need to be addressed. Sometimes the entire room of learners is unsure of the next appropriate step in management; in this case, the debriefer must simply provide directed teaching. In other cases, the learner has made a slight cognitive error, which can often be addressed through facilitated discussion with the other learners.

4. Summary Phase

Once the group has gone through all the desired learning objectives in the analysis phase, it is imperative that the instructor guides a review of key points related to the objectives. If time is short, the instructor can provide the summary themselves. If time is more abundant, it can be useful to have the learners go through their own key learning points.

As we can see, a fair amount of effort is required to facilitate an excellent debrief. With frameworks like the PEARLS approach, experienced and inexperienced educators alike have a practical means upon which to build their debriefing skills.

What tips and tricks do you use in your debriefing?

References:

  1. Kolb DA. Experiential learning: experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall; 1984.
  2. Schön D. The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books; 1983.
  3. Eppich, W., Cheng, A. Promoting excellence and reflective learning in simulation (PEARLS). Simul Healthc. 2015:1. doi:10.1097/SIH.0000000000000072.
  4. Rudolph, JW., Simon R., Rivard P., Dufresne RL., Raemer, DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361-376.