Simulation Solutions for Low Resource Settings

This review on simulation teaching in a low resource setting was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow, after her Toronto-Addis Ababa Academic Collaboration in Emergency Medicine (TAAAC-EM) elective.

This past November I participated in an international elective in Addis Ababa, Ethiopia as a resident on the TAAAC-EM team. TAAAC-EM sends visiting faculty to teach and clinically mentor Ethiopian EM residents 3-4 times a year. Teaching trips cover a longitudinal, three-year curriculum through didactic teaching sessions, practical seminars, and bedside clinical supervision.

One area of development the residents identified was a desire for more simulation exercises. As a budding simulationist and SHRED fellow, I was particularly keen to contribute to the curriculum. Starting from the basics, we created two simulation curricula:

  • Rapid Cycle Deliberate Practice (RCDP) simulation exercises covering basic Vfib, Vtach, PEA, and asystolic arrests in short 5-minute simulation and debrief cycles, and
  • Airway management exercises: a series of three cases addressing preparing for intubation, intubating a patient in respiratory extremis, and troubleshooting an intubated and ventilated patient using the DOPES mnemonic

The local simulation centres were relatively well equipped; however, there were no high fidelity mannequins or elaborate monitor setups. We used an intubating mannequin head and torso for the airway simulation and a basic CPR mannequin for the RCDP exercise.

[Image: Set up for the airway simulation exercises]

For additional materials, we had to MacGyver some of the tools we needed to create these simulation scenarios, and in doing so we learned some valuable lessons. This post will outline some of the ways we created a higher sense of fidelity, even with low technology resources, and created high yield learning experiences for the residents.

Understand what actual resources are available to the trainees in the ED before you try to create a simulation exercise

It took a few weeks of working in the ED to really understand what resources were available to the staff and residents. For the most part there was no continuous saturation monitoring. X-rays were typically not done until after the patient was stabilized, because the patient had to be taken out of the resuscitation room to the imaging department. Lastly, some medications that we use in Canada on a daily basis were simply not available. The first step to creating simulation in low resource settings is to understand the available resources.

Communication skills do not require a high technology environment; neither do CPR or BVM skills!

Both simulation exercises focussed on team communication, closed loop communication, team preparation for interventions (like intubation), and team leadership. While our supplies were basic, the simulations lent themselves well to discussing improved communication methods in the ED during resuscitations. We also emphasized excellent quality CPR and reiterated basic bagging techniques. We can all use refreshers on the basics, and this is one way we made the simulations fruitful for learners of all levels.


GIFs make great rhythm generators

For these simulation sessions we did not consistently have access to a rhythm generator, so we downloaded GIFs of Vfib, Vtach, asystole, and normal sinus rhythm to display on our phones. This turned a partially functioning defibrillator into a monitor! We could also change the rhythm easily by picking a new GIF.


Wherever possible, add fidelity

While the technology was limited, the opportunity to bring human factors into the simulations was not. We used real bottles of medications, which helped the residents suspend some of their disbelief, and we encouraged them to verbalize the actual medication doses. We talked about safely labelling syringes so as not to confuse which medication was in which syringe (sedation vs paralysis), and how to do a team time out before a critical intervention to ensure necessary supplies were available. We even simulated a broken laryngoscope by removing the battery to add a level of complexity to the case; if they didn't check the laryngoscope ahead of time, they wouldn't have noticed.


At its core, simulation should be fun

One of my most poignant memories from these simulation sessions is how much fun the residents had. We had to pause a few times because we were laughing so hard. These residents work extremely closely over their training, side by side and as a group for the majority of their on service time; they see more of each other than I have seen of my co-residents during my residency thus far. Because of their extensive time together, they seem to have closer personal relationships, and as a result they have more fun together, even in the ED. The residents all appreciated these sessions where they learned together, and they really enjoyed each other's company. Their joy refreshed and rejuvenated my love for simulation!

Simulation is an important teaching tool for learners in EM no matter where they are training. We take for granted our high tech sim labs, dedicated simulation curricula, and protected time to practice resuscitations and learn. Simulation offers the ability to make mistakes in a safe environment, to learn with our peers, and to develop an expertise that we can apply in the ED—something the residents in Addis Ababa really wanted to have as part of their ongoing curriculum. Applying my simulation training to a low resource setting has helped me grow as a simulationist and become pretty creative in how I approach resource limitations. I’m particularly grateful to the residents for not only being patient, keen, and enthusiastic as we worked through some of these challenges, but also for allowing me to take photos to post on this blog!


Validity – Starting with the Basics

This critique on validity and how it relates to simulation teaching was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

When designing simulation exercises that will ultimately lead to the assessment and evaluation of a learner’s competency for a given skill, the validity of the simulation as a teaching tool should be addressed on a variety of levels. This is especially relevant when creating simulation exercises for competencies outside of the medical expert realm such as communication, team training and problem solving.

As a budding resuscitationist and simulationist, I find that understanding validity is vital to ensuring that the simulation exercises I create actually measure what they intend to measure; that is, that they are valid (Devitt et al.). As we look ahead to Competency Based Medical Education (CBME), it will become increasingly important to develop simulation exercises that are not only interesting and high-yield with respect to training residents in critical skills, but that also have high validity with respect to reproducibility and the translation of skills into real world resuscitation and patient care.

To illustrate the various types of validity and how they can affect simulation design, I will present an example of an exercise I implemented when I was tasked with teaching a 5-year-old to tie her shoelaces. To do so, I taught her using a model very similar to one I found on Pinterest:

[Image: cardboard shoelace-tying practice model]

We first learned the rhyme, then used this template to practice over and over again. The idea behind using the model was to provide the reference of the rhyme right next to the shoes, but also to enlarge the scale of the shoes and laces, since her tiny feet meant tiny laces on shoes that were difficult for her to manipulate. It also meant we could do the exercise at the table, which allowed us to be comfortable as we learned. At the end of the exercise, I gave her a "test" and asked her to tie the cardboard shoes to see if she remembered what we learned. While there was no rigorous evaluation scheme, the standard was that she should be able to tie the knot to completion (competency), leading to two loops at the end.

I applied my simulation learning to this experience to assess the validity of this exercise in improving her ability to tie her laces. The test involved her tying these laces by herself, without prompting.

Face validity: Does this exercise appear to test the skills we want it to?

Very similar to "at face value," face validity is the degree to which a test or exercise looks like it is going to measure what it intends to measure. This can be assessed from an "outsider" perspective: for example, asking her mom whether she feels this test could measure her child's ability to tie a shoe. Whether the test actually works is not the concern of face validity; rather, it is whether it looks like it will work (Andale). Her mom thought this exercise would be useful in learning how to tie shoes, so face validity was achieved.

Content validity: Does the content of this test or exercise reflect the knowledge the learner needs to display? 

Content validity is the extent to which the content in the simulation exercise is relevant to what you are trying to evaluate (Hall, Pickett and Dagnone). Content validity requires an understanding of the content required to learn a skill or perform a task. In Emergency Medicine, content validity is easily understood when considering a simulation exercise designed to teach learners to treat a Vfib arrest: the content is established by the ACLS guidelines, and best practices have been clearly laid out. For more nebulous skill sets (communication, complex resuscitations, rare but critical skills like bougie-assisted cricothyroidotomies, problem solving, team training), the content is not as well defined, and may require surveys of experts, panels, and reviews by independent groups (Hall, Pickett and Dagnone). For my shoelace-tying learner, the content was defined as a single way to tie her shoelaces; it did not include the initial lacing of the shoes or how to tell which shoe is right or left, and, most importantly, the final test did not include these components. Had I tested her on lacing or on appropriately choosing right or left, I would not have had content or face validity. This speaks to choosing appropriate objectives for a simulation exercise; objectives are the foundation upon which learners develop a scaffolding for their learning. If instructors are going to use simulation to evaluate learners, the objectives need to clearly drive the content, and in turn the evaluation.

Construct Validity: Is the test structured in a way that actually measures what it claims to?

In short, construct validity is assessing if you are measuring what you intend to measure.

My hypothesis for our exercise was that any measurable improvement in her ability to tie her shoelaces would be attributable to the exercise, and that with this exercise she would improve her ability to complete the steps required to tie her shoelaces. At the beginning of the shoelace-tying exercise, she could pick up the laces, one in each hand, and then looked at me mostly blankly for the next steps. At the end of the exercise and for the final "test," she was able to hold the laces and complete the teepee so it's "closed tight" without any prompting. The fact that she improved is evidence to support the construct; however, construct validity is an iterative process and requires different forms of evaluation to prove the construct. To verify construct validity, other tests with similar qualities can be used. For this shoelace-tying exercise, we might say that shoelace tying is a product of fine motor dexterity, and fine motor dexterity theory would predict that as her ability to perform other dexterity-based exercises (tying a bow, threading beads onto a string) improves, so should her performance on the test. To validate our construct, we could then perform the exercise over time and see if her performance improves as her motor skills develop, or compare her performance on the test to that of an older child or adult, who would have better motor skills and should perform better on the test.
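To make the convergent-evidence idea concrete, here is a minimal sketch in Python of how one might check whether shoelace test scores track scores on another dexterity task. All of the numbers are invented for illustration; a real validation study would use far more data and report uncertainty.

```python
# Hypothetical convergent-evidence check: if shoelace tying reflects fine
# motor dexterity, scores on the shoelace test should correlate with scores
# on another dexterity task (e.g., threading beads onto a string).
from statistics import mean

# Invented example data: one score pair per practice session (0-10 scale).
shoelace_scores = [2, 3, 5, 6, 8, 9]
bead_threading_scores = [3, 4, 4, 7, 7, 9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A strong positive correlation is consistent with, but does not prove,
# the construct; construct validation remains an iterative process.
print(f"r = {pearson_r(shoelace_scores, bead_threading_scores):.2f}")
```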

External validity: Can the results of this exercise or study be generalized to other populations or settings, and if so, which ones?

With this shoelace-tying exercise, should the results be tested and a causal relationship established between the exercise and the ability to tie shoes, the next step would be to see if the results can be generalized to other learners in different environments. This would require further study and a careful selection of participant groups and participants to reduce bias. It would also be an opportunity to vary the context of the exercise and the level of difficulty, and to introduce variables to see if the cardboard model could be extrapolated to actual shoe tying.

Internal validity: Is there another cause that could explain my observations?

With this exercise, her ability to tie laces improved over the course of the day. In order to assess internal validity, it is important to ask whether any improvement or change in behaviour could be attributed to another external factor (Shuttleworth). For this exercise, there was only one instructor and one student in a consistent environment. Had we reproduced this exercise using a few novice shoelace tiers and a few different instructors, it might have added confounders to the experiment, making it less clear whether improvements in shoelace tying were attributable to the exercise or to the instructors. Selection bias can also affect internal validity; for example, selecting participants who were older (and therefore had more motor dexterity to begin with) or who had previous shoelace-tying training would likely affect the outcome. For simulation exercises, internal validity can be confounded by multiple instructors, differences in the mannequin or simulation lab, and different instructor styles, which may lead to differences in learning. Overcoming these challenges to internal validity is partly achieved by robust design, but also by repeating the exercise to ensure that the outcomes are reproducible across a wider variety of participants than the sample cohort.

There are many types of validity, and robust research projects require an understanding of validity to guide the initial design of a study or exercise. Through this exercise I was able to take the somewhat abstract concepts of face validity and internal validity and ground them in practice through a relatively simple example. Doing so has helped me form a foundation in validity theory, which I can now expand into evaluating the simulation exercises that I create.

REFERENCES

1) Andale. "Face Validity: Definition and Examples." Statistics How To. 2015. Web. Accessed October 20, 2017.

2) Devitt, J. H., et al. "The Validity of Performance Assessments Using Simulation." Anesthesiology 95.1 (2001): 36-42. Print.

3) Hall, A. K., W. Pickett, and J. D. Dagnone. "Development and Evaluation of a Simulation-Based Resuscitation Scenario Assessment Tool for Emergency Medicine Residents." CJEM 14.3 (2012): 139-46. Print.

4) Shuttleworth, M. "Internal Validity." Explorable.com, July 5, 2009. https://explorable.com/internal-validity. Accessed October 26, 2017.

5) Shuttleworth, M. "External Validity." Explorable.com, August 7, 2009. https://explorable.com/external-validity. Accessed October 26, 2017.

Simulation-Based Assessment

This critique on simulation-based assessment was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

You like to run simulations.  You have become adept at creating innovative and insightful simulations. You have honed your skills in leading a constructive debrief.  So what’s next? You now hope to be able to measure the impact of your simulation.  How do you design a study to measure the effectiveness of your simulation on medical trainee education?

There are numerous decisions to make when designing a sim-based assessment study. For example, who is included in the study?  Do you use direct observation or videotape recording or both? Who will evaluate the trainees? How do you train your raters to achieve acceptable inter-rater reliability? What are you assessing – team-based performance or individual performance?

One key decision is the evaluation tool used for assessing participants.  A tool ideally should:

  • Have high inter-rater reliability
  • Have high construct validity
  • Be feasible to administer
  • Be able to discriminate between different levels of trainees

Two commonly used sim-based assessment tools are Global Rating Scales (GRS) and Checklists. Here, these tools will be compared to evaluate their roles in the assessment of simulation in medical education.

Global Rating Scales vs Checklists

GRS are tools that allow raters to judge participants' overall performance and/or provide an overall impression of performance on specific sub-tasks.1 Checklists are lists of specific actions or items that are to be performed by the learner; they prompt raters to attest to directly observable actions.1

Many GRS ask raters to use a summary item to rate overall ability or to rate a "global impression" of learners. This summary item can be a scale from fail to excellent, as in Figure 1 below.2 Another GRS might assess a learner's ability to perform a task independently by having raters mark learners on a scale from "not competent" to "performs independently". In studies, the overall GRS has been shown to be more sensitive than checklists at discriminating between levels of experience.3,4,5 Other research has shown that GRS demonstrate inter-item and inter-station reliability and validity superior to checklists.1,6,7,8 GRS can be used across multiple tasks and may better measure expertise levels in learners.1

Some of the pitfalls of GRS are that they can be quite subjective, and that they rely on "expert" opinion in order to grade learners effectively and reliably.

Figure 1: Assessment tool used by Hall et al. in their study evaluating a simulation-based assessment tool for emergency medicine residents, using both a checklist and a global assessment rating.2

Checklists, on the other hand, are thought to be less subjective, though some studies argue this is not the case, as the language used in a checklist can itself be subjective.10 If designed well, however, checklists provide clear step-by-step outlines for raters to mark observable behaviours. A well-designed checklist is easy to administer, so any teacher can use it (without relying on experts to administer the tool). By measuring defined and specific behaviours, checklists may also help to guide feedback to learners.

However, some pitfalls of checklists are that high scores have not been shown to rule out "incompetence" and therefore may not accurately evaluate skill level.9,10 Checklists may also touch on multiple areas of competence, which may contribute to lower inter-item reliability.1 Other studies have found that despite checklists being theoretically easy to use, inter-rater reliability was consistently low.9 However, a systematic review of the literature found that checklists performed similarly to GRS in terms of inter-rater reliability.1
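For readers who want to quantify inter-rater reliability for a dichotomous checklist, one common starting point is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. Below is a minimal Python sketch with invented ratings; a real study would use a statistics package and report confidence intervals.

```python
# Cohen's kappa for two raters scoring the same checklist items as
# done (1) / not done (0). Kappa corrects raw agreement for chance.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: both mark 1, plus both mark 0.
    p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Invented example: 10 checklist items scored by two raters.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 1.0 would be perfect
```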


TABLE 1: Pros and Cons of Global Rating Scales and Checklists

Global Rating Scales

Pros:
  • Higher internal reliability
  • More sensitive in discriminating level of training
  • Higher inter-station reliability and generalizability

Cons:
  • Less precise
  • Subjective rater judgement and decision making
  • May require experts or more rater training in order to rate learners

Checklists

Pros:
  • Good for the measurement of defined steps or specific components of performance
  • Possibly more objective
  • Easy to administer
  • Easy to identify defined actions for learner feedback

Cons:
  • Possibly lower reliability
  • Requires dichotomous ratings, possibly resulting in loss of information
Conclusion

With the move towards competency-based education, simulation will play an important role in evaluating learners' competencies. Simulation-based assessment allows for direct evaluation of an individual's knowledge, technical skills, clinical reasoning, and teamwork. Assessment tools are an important component of medical education.

An optimal assessment tool for evaluating simulation would be reliable, valid, comprehensive, and able to discriminate between learners' abilities. Global Rating Scales and Checklists each have their own advantages and pitfalls, and each may be used for the assessment of specific outcome measures. Studies suggest that GRS have some important advantages over checklists, yet the evidence for checklists appears somewhat stronger than previously thought. Whichever tool is chosen, it is critical to design and test the tool to ensure that it appropriately assesses the desired outcome. If feasible, using both a Checklist and a Global Rating Scale would help to optimize the effectiveness of sim-based education.

REFERENCES

1. Ilgen JS, et al. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015;49(2):161-73.

2. Hall AK, et al. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. CJEM. 2012;14(3):139-46.

3. Hodges B, et al. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012-6.

4. Morgan PJ, et al. A comparison of global ratings and checklist scores from an undergraduate assessment using an anesthesia simulator. Acad Med. 2001;76(10):1053-5.

5. Tedesco MM, et al. Simulation-based endovascular skills assessment: the future of credentialing? J Vasc Surg. 2008;47(5):1008-11.

6. Hodges B, et al. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129-34.

7. Hodges B, McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012-6.

8. Regehr G, et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993-7.

9. Walsak A, et al. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med. 2015;90(8):1100-8.

10. Ma IW, et al. Comparing the use of global rating scale with checklists for the assessment of central venous catheterization skills using simulation. Adv Health Sci Educ Theory Pract. 2012;17:457-70.

Simulation Design

This critique on simulation design was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

Have you ever designed a simulation case for learners? If so, did you create your sim on a “cool case” that you saw?  I think we have all been guilty of this; I know I have. Obviously a unique, interesting case should make for a good sim, right?  And learning objectives can be created after the case creation?

Recently, during my Simulation, Health Sciences and Resuscitation in the ED fellowship (SHRED), I have come to discover some of the theory and methods behind the madness of creating sim cases. Pleasantly, rather than making things more complicated, having an approach to sim creation not only helps to guide meaningful educational goals but also makes life a whole lot easier!

I find it helpful to think of sim development in the PRE-sim, DURING-sim, and POST-sim phases.

In a systematic review of simulation-based education, Issenberg et al. describe 10 aspects of simulation interventions that lead to effective learning, which I will incorporate into the different phases of sim design.1

PRE-sim

Like many things, the bulk of the work and planning is required in the PRE phase.

When deciding whether or not to use sim as a learning tool, the first step should be to ask which modality is most appropriate for the stated learning objectives.1 A one-size-fits-all approach is not optimal for learning. This is stated well in a paper by Lioce et al. about simulation design: the "modality is the platform of the experience".2 For me, one of the most important considerations is the following: can the learning objectives be appropriately attained through simulation, and if so, what type of simulation? For example, if the goal is to learn about advanced airway adjuncts, this may be best suited to repetitive training on an airway mannequin or a focused task trainer. If the goal is to work through a difficult airway algorithm, perhaps learners should progress through cases requiring increasingly difficult airway management using immersive, full-scale simulation. In-situ inter-professional team training can be used to explore systems-based processes. Basically, a needs assessment is key; the paper by Lioce et al. describes guidelines for working through one.2

Next, simulation should be integrated into an overall curriculum to provide the opportunity to engage in repetitive (deliberate) practice.1 Simulation in isolation may not produce effective, sustainable results.3 An overall curriculum, while time-consuming to develop and implement, is a worthy undertaking. Having one simulation build upon others may improve learning through spaced repetition and varied context, delivery, and level of difficulty.

This can be difficult to achieve given constrained time, space, and financial resources. Rather than repeat the same cases multiple times, Adler et al. created cases that had overlapping themes; the content and learning objectives differed between the cases, but they had similar outcome measures.3 This strategy could be employed in curriculum design to enhance repeated exposure while limiting the number of total sessions required.

Effective programmatic design should facilitate individualized learning and provide clinical variation.1 Lioce et al. refer to a needs assessment as the foundation of any well-designed simulation.2 Simulation has often addressed certain competencies residents are supposed to master (airway, toxicology, trauma, pediatrics, etc.) without seeking a priori input on the learning needs of the residents. It may be valuable to survey participants and design simulations based on perceived curriculum gaps or learning objectives, or to assess baseline knowledge with structured assessment techniques prior to designing cases and curricula. (NB: such a project is currently underway, led by simulation investigators at Sunnybrook Hospital in Toronto.)

Learners should have the opportunity to practice with increasing levels of difficulty:1 It is logical that learners at different stages of their training require different gradations of difficulty. Dr. Martin Kuuskne breaks down the development of simulation cases into their basic elements, advocating for thinking of each sim objective in terms of both knowledge and cognitive process.4

The knowledge components can be divided into the medical and crisis resource management (CRM), or preferably, non-technical skills.5 Medical knowledge objectives are self-explanatory and should be based on the level of the trainee. Non-technical skills objectives typically relate to team-based communication, leadership, resource utilization, situational awareness, and problem solving.6 Kuuskne's post makes the very salient point that we need to limit the number of objectives in both of these domains, as too many can quickly overwhelm learners and decrease absorption of knowledge.

The cognitive process objectives can also be developed with increasing complexity, depending on the level of the trainee.4 For example, at the lowest level of learning is "remembering": describing, naming, repeating, etc. At the highest level of learning is "creating": formulating, integrating, modifying, etc. A case could be made for involving senior learners in creating and implementing their own sim cases.

DURING-sim

As part of creating scripts and cases, case designers should try to anticipate learner actions and pitfalls. There will always be surprises and unexpected actions (a good reason to trial, beta test, and revise before deploying). On EMSimCases.com, Kuuskne outlines his approach to creating the case progression and how it can be standardized.6 The patient in the simulation has a set of definite states, i.e. the condition of the patient created by vital signs and their clinical status.6 We can think of progression between states in terms of learner-driven modifiers and triggers: modifiers are actions that make a change in the patient within a state, whereas triggers are actions that change the state of the patient. I found this terminology helpful when outlining case progression; a minimal sketch of the structure follows.
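Here is that minimal sketch in Python. The state names, vitals, and actions are entirely invented for illustration and are not from Kuuskne's post; the point is only to show how states, modifiers, and triggers fit together as a simple state machine.

```python
# Toy case-progression engine using the states / modifiers / triggers
# vocabulary. All clinical details are invented for illustration.

# Each state fixes the patient's baseline vitals and clinical status.
STATES = {
    "unstable_svt": {"hr": 180, "bp": "82/50", "status": "pale, diaphoretic"},
    "sinus_rhythm": {"hr": 90, "bp": "118/76", "status": "improving"},
}

# Modifiers change the patient within the current state.
MODIFIERS = {
    "iv_fluid_bolus": lambda vitals: {**vitals, "bp": "90/58"},
}

# Triggers move the patient to a new state.
TRIGGERS = {
    "synchronized_cardioversion": "sinus_rhythm",
}

def apply_action(state, vitals, action):
    """Return the (possibly new) state and vitals after a learner action."""
    if action in TRIGGERS:                    # trigger: change the state
        new_state = TRIGGERS[action]
        return new_state, dict(STATES[new_state])
    if action in MODIFIERS:                   # modifier: change within the state
        return state, MODIFIERS[action](vitals)
    return state, vitals                      # unscripted action: no effect

state, vitals = "unstable_svt", dict(STATES["unstable_svt"])
for action in ["iv_fluid_bolus", "synchronized_cardioversion"]:
    state, vitals = apply_action(state, vitals, action)
    print(f"{action} -> {state}: {vitals}")
```

Writing the progression down this explicitly also makes a case easy to beta test: every anticipated learner action maps to a defined modifier or trigger, and anything unmapped is a prompt to revise the script.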

Simulation allows for standardization of learning in a controlled environment.1 The truth of residency training is that, even in the same program, residents will all have uniquely different experiences. One resident ahead of me had, by graduation, taken part in 10 resuscitative thoracotomies. Many residents in the same class had not seen any. We cannot predict what walks through our doors, but we can try to give residents the same baseline skills and knowledge to deal with whatever does.

POST-sim

Feedback is provided during the learning experience,1 except in an exam-type setting, where it should be given after. It is important again to note the necessity of limiting the number of learning objectives, so you have room for scripted and unscripted topics of conversation. Debriefing the case should then be a breeze, as it flows from the case objectives created at the beginning.

Going further than "the debrief" is the question of how we evaluate the value of sim. To me, this is the most difficult step, and it is rarely done. Evaluation of each sim case should be sought from participants and stakeholders, in addition to pilot testing. That information needs to be fed forward to make meaningful improvements in case design and implementation.

Outcomes or benchmarks should be clearly defined and measured. The randomized study by Adler et al. created clearly defined critical rating checklists during the development and needs assessment of their sim cases.3 They then tested each case twice on residents to get feedback.

In summary, although a “cool case” is always interesting, it doesn’t always make the best substrate for teaching and learning in the simulator.  Thoughtful case creation for simulation needs to go beyond that, breaking down the design process into basic, known components and using a structured theory-based approach in order to achieve meaningful educational outcomes.

REFERENCES:

1. Issenberg et al. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10-28.

2. Lioce et al. Standards of Best Practice: Simulation Standard IX: Simulation Design. Clinical Simulation in Nursing. 2015;11:309-315.

3. Adler et al. Development and Evaluation of a Simulation-Based Pediatric Emergency Medicine Curriculum. Academic Medicine. 2009;84:935-941.

4. Kuuskne M. How to develop targeted simulation learning objectives – Part 1: The Theory. EMSimCases. April 21, 2015. https://emsimcases.com/2015/04/21/how-to-develop-targeted-simulation-learning-objectives-part-1-the-theory/

5. Kuuskne M. How to develop targeted simulation learning objectives – Part 2: The Practice. EMSimCases. June 15, 2015. https://emsimcases.com/2015/06/16/how-to-develop-targeted-simulation-learning-objectives-part-2-the-practice/

6. Kuuskne M. Case progression: states, modifiers and triggers. EMSimCases. May 19, 2015. https://emsimcases.com/2015/05/19/case-progression-states-modifiers-and-triggers/

Cashing out by buying in – How expensive does a mannequin have to be to call a simulation “high fidelity?”

This critique on simulation fidelity was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

How expensive does a mannequin have to be to call a simulation “high fidelity?”


That was the question I was pondering this week, as our SHRED theme this month is simulation in medical education. In my 4 years of residency at the University of Toronto, most of my simulation training has taken place in one of our two simulation labs, using one of our three "high fidelity" mannequins. However, even though the simulation labs and equipment have been very consistent over the past few years, I have found a fluctuating attentiveness and "buy-in" to these simulation sessions: some have felt very real and have resulted in a measurable level of stress and urgency to improve the patient's (read: mannequin's) outcome, while others have felt like a mandatory hoop through which to jump in order to pass a rotation.

It should come as no surprise that in Emergency Medicine, simulation is a necessary part of our development as residents. Simulation-based medical education allows trainees to meet standards of care and training, mitigates risks to patients, develops clinical competencies, improves patient safety, aids in managing complex patient encounters, and protects patients [1]. Furthermore, in emergency medicine, simulation has allowed me to practice rare and life-saving critical skills like cricothyroidotomies and thoracotomies before employing them in real-time resuscitations. Those who know me will tell you that when it comes to simulation I fully support its use as an educational tool, but there does still seem to be an ebb and flow to how much I commit to each sim case that I participate in as a learner.

During a CCU rotation, I was involved in a relatively simple "chest pain" simulation exercise. As the circulating resident, I was tasked with giving the patient ASA to chew. In that moment I didn't just simulate giving ASA; I took the yellow lid from an epinephrine kit (it looked like a small circular tablet) and put it in the mannequin's mouth, asking him to chew it. I did not think much of it until our airway resident was preparing to intubate, and the whole case derailed into an "airway foreign body" scenario, to the confusion of the simulationists sitting behind the window, who didn't know how that foreign body got into the airway in the first place. Why did I do that? I believe it's because I bought into the scenario: in my eyes that mannequin was my patient, and my patient needed the ASA to chew. The chest pain case, although derailed into a difficult airway case by my earnest delivery of medications, took place in the context of a residency rotation where I was expected to manage the CCU independently overnight. That context allowed me to buy into the case, because I knew these skills were transferrable to my role as a CCU resident. My buy-in has had less to do with the mannequin and the physical space and everything to do with how the simulation fit into the greater context of my current training.

There has been discussion amongst simulationists that there should be a frame shift away from fidelity and towards educational effectiveness: helping to engage learners, providing framework and context to aid them in suspending their disbelief, and providing structure to apply the simulation to real-time resuscitations in order to enhance learner engagement [2]. The notion of functional fidelity is one that resonates with me as a budding simulationist; if a learner has an educational goal and is oriented to how the simulation will provide the context and platform to learn that goal, the learner may more easily "project fidelity onto the simulation scenario." That is, the learner will buy into the simulation [2].

 So how do we facilitate buy-in?

We can start by orienting learners meaningfully and intentionally to the simulation exercises [3]. This can be accomplished by demonstrating how the concepts from the simulation are transferrable to other contexts, which can allow learners to engage on a deeper level with the simulation and see the greater applicability of what they are learning [2]. We can't assume learners understand why or how an exercise is applicable to them. A chest pain case for a senior resident in emergency medicine has very different learning outcomes than the same case for an off-service junior resident rotating through the ER; the same can be said for a resident primarily working in the hospital versus an outpatient clinic. Tailoring case objectives to specific learners provides an opportunity to teach relevant skills in the context of their training, giving them a reason to buy into the scenario. Moving beyond "to learn…" or "to outline the management of…", I would advocate that specifically outlining objectives for the level and specialties of participating learners will help them see the employability of the skills they gain in the simulation.

We can also use the specific objectives and context that open the simulation session to foster a more directed debrief. The post-simulation discussion should cover not only medical management principles but also what learners would do if they encountered a similar situation in their specific work environment (clinic, ward, etc.), transferring the learning out of the simulation lab and into real world medical practice.

If we are going to see simulation as a tool, let's see it as one of those fancy screwdrivers with multiple bits, and stop trying to use the screwdriver handle as a hammer for every nail. No one mannequin, regardless of how expensive it is or how many fancy features it has, can replace the role of a thoughtful facilitator who can help learners buy into the simulation. If facilitators take the time to orient the learner to their specific learning objectives and then reinforce that context in the debrief discussion, they can increase the functional fidelity of the session and aid learners in maximizing the benefit of each simulation experience.

Citations 

  1. Ziv, A., Wolpe, P. R., Small, S. D., & Glick, S. (2003). Simulation-Based Medical Education. Academic Medicine, 78(8), 783-788. doi:10.1097/00001888-200308000-00006
  2. Hamstra, S. J., Brydges, R., Hatala, R., Zendejas, B., & Cook, D. A. (2014). Reconsidering Fidelity in Simulation-Based Training. Academic Medicine, 89(3), 387-392. doi:10.1097/acm.0000000000000130
  3. Issenberg, S. B., Mcgaghie, W. C., Petrusa, E. R., Gordon, D. L., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher, 27(1), 10-28. doi:10.1080/01421590500046924


Aortic Dissection

This case was written by Dr. Martin Kuuskne who is one of the editors-in-chief at EMSimCases and is an attending Emergency Medicine Physician at University Health Network in Toronto.

Why it Matters

Aortic dissection is one of the deadliest causes of chest pain encountered by the emergency physician. Its presentation, methods of diagnosis, management, and complications are varied and demand critical thinking, clear communication, and teamwork. This case highlights the following points:

  1. The key elements of the history, physical exam and initial investigations that support the diagnosis of aortic dissection.
  2. The importance of managing hypertension in the setting of aortic dissection, including specific blood pressure and heart rate targets.
  3. The need to set priorities dynamically as a patient becomes unstable and requires ACLS care.

Clinical Vignette 

You are working the day shift at a tertiary-care hospital. A 66-year-old female is being wheeled into the resuscitation bay with a history of a syncopal episode. No family members or friends are present with the patient.

Case Summary

A 66-year-old female with a history of smoking, HTN and T2DM presents with syncope while walking her dog. She complains of retrosternal chest pain radiating to her jaw. She will become increasingly bradycardic and hypotensive, requiring the team to mobilize resources in order to facilitate diagnosis and management of an aortic dissection.

Download the case here: Aortic Dissection

First EKG for the case: Sinus tachycardia

(EKG Source: http://i0.wp.com/lifeinthefastlane.com/wp-content/uploads/2011/12/sinus-tachycardia.jpg)

Second EKG for the case:

[EKG: Mobitz I with STEMI]

(EKG Source: http://hqmeded-ecg.blogspot.ca/2012_09_01_archive.html)

CXR for the case:

(CXR Source: https://radiopaedia.org/articles/aortic-dissection)

VSA Megacode

This case is written by Dr. Cheryl ffrench, a staff Emergency Physician at the Health Sciences Centre in Winnipeg. She is the Associate Program Director and the Director of Simulation for the University of Manitoba’s FRCP-EM residency program; she is also on the Advisory Board of emsimcases.com.

Why it Matters

Leading a resuscitation is a core skill of an Emergency Physician. More often than not, we know very little about the patient's history before orchestrating a team of nurses, respiratory technicians, residents, and other team members to provide resuscitative care. Assessment of the cardiac rhythm and pulse allows us to start with ACLS algorithms in order to hopefully obtain return of spontaneous circulation (ROSC), initiate post-ROSC care, and arrange for the appropriate disposition of the patient. This case, which is geared toward junior learners, highlights the following:

  • The importance of resource allocation during a prolonged resuscitation
  • Managing the resuscitation team, ensuring effective communication and recognizing compression fatigue.
  • Providing high quality ACLS and post-ROSC care
  • Recognizing STEMI as the cause of the cardiac arrest and initiating disposition for percutaneous coronary intervention (PCI)

Clinical Vignette

A 54-year-old male police officer presents to the ED with chest pain. He played his normal weekend hockey game about two hours ago. He has been having retrosternal chest pain since the game ended. It improved with rest, but has not resolved completely. It is worse after walking into the department. He now feels dizzy, short of breath, and nauseous.

Case Summary

A 54-year-old male police officer presents to the ED complaining of chest pain for two hours that started after his weekend hockey game. He is feeling dizzy and short of breath upon presentation. He will have a VT arrest as he is placed on the monitor. He will require two shocks and rounds of CPR before he has ROSC. He will then lose his pulse again while the team is trying to initiate post-arrest care; this will happen several times. Finally, the team will maintain ROSC. When an ECG is performed, it will reveal that the patient has a STEMI, and the team will need to call for emergent PCI.

Download the case here: VSA Megacode

ECG for the case found here:

[ECG: anterolateral STEMI]

(ECG source: http://cdn.lifeinthefastlane.com/wp-content/uploads/2011/10/anterolateral.jpg)

Post Intubation-CXR for the case found here:

[CXR: normal post-intubation film]

(CXR source: https://emcow.files.wordpress.com/2012/11/normal-intubation2.jpg)

Tumour Lysis Syndrome

This case is written by Dr. Donika Orlich, a PGY5 Emergency Medicine resident at McMaster University who completed a fellowship in Simulation and Medical Education last year.

Why it Matters

Tumour Lysis Syndrome is a constellation of metabolic disturbances that can occur as a potentially fatal complication of treating cancers, most notably leukemias or rapidly proliferating solid tumours. This case highlights the following:

  • The identification and management of severe hyperkalemia
  • The need to consider Tumour Lysis Syndrome as a diagnosis and order appropriate metabolic tests
  • Recognizing and initiating the treatment of severe hyperuricemia
  • Communicating with family members effectively during the treatment of a critically ill patient.

Clinical Vignette

A 72-year-old male presents to the emergency department complaining of general weakness for 2 days. His wife called EMS, and he arrives as a STEMI patch to your hospital. He has been placed in the resuscitation bay.

Case Summary

A 72-year-old male is brought in as a "code STEMI" to the resuscitation bay. He was recently diagnosed with ALL and had his first round of chemotherapy 3 days ago. As a result of Tumour Lysis Syndrome, the patient is severely hyperkalemic (which must be recognized and treated early), hypocalcemic, and hyperuricemic; the metabolic derangements must be stabilized until emergent hemodialysis is arranged.

Download the case here: Tumour Lysis Syndrome

ECGs for the case found here:

[ECG: hyperkalaemia with PR lengthening]

(Source:  http://lifeinthefastlane.com/ecg-library/basics/hyperkalaemia/)

[ECG: normal sinus rhythm]

(Source:  http://cdn.lifeinthefastlane.com/wp-content/uploads/2011/12/normal-sinus-rhythm.jpg)

CXR for the case found here:


Unstable Bradycardia

This case was written by Dr. Martin Kuuskne from McGill University. Dr. Kuuskne is a PGY5 Emergency Medicine resident and one of the editors-in-chief at EMSimCases.

Why it Matters

High-degree AV blocks (second degree Mobitz type II and third degree AV block) rarely respond to atropine and necessitate electrical pacing, IV chronotropic agents, or both. This case highlights the following points:

  1. Anticipating the deterioration of a patient with unstable bradycardia by placing pacer pads early and initiating transcutaneous pacing
  2. The use of IV chronotropic agents in the treatment of severe bradycardia
  3. Recognizing PEA in the deteriorating bradycardic patient

Clinical Vignette 

A 78-year-old male from a long-term care facility is being transferred to the emergency department for decreased mental status.

Case Summary

A 78-year-old male presents to the emergency department with an unstable bradycardia. The patient deteriorates from a second degree Mobitz type II AV block into a third degree AV block, requiring ACLS protocol medications, transcutaneous pacing, and ultimately transvenous pacing until definitive management with a permanent pacemaker can be arranged.

Download the case here: Bradycardia

First EKG for the case:

http://lifeinthefastlane.com/quiz-ecg-014/

Second EKG for the case:

[EKG: third degree AV block]

http://www.emedu.org/ecg/searchdr.php?diag=3d

CXR for the case here:


http://radiopaedia.org/

Bedside Ultrasounds for the case:

Intra-abdominal Sepsis

This case was written by Dr. Martin Kuuskne from McGill University. Dr. Kuuskne is a PGY5 Emergency Medicine resident and one of the editors-in-chief at EMSimCases.

Why it Matters

Although recent literature has challenged the use of protocolized care in the management of sepsis, this case highlights the key points that are crucial in early sepsis care, namely:

  • The recognition of sepsis and identifying a likely source of infection
  • The initiation of broad-spectrum antibiotics in the emergency department
  • Hemodynamic resuscitation with intravenous fluids and vasopressor therapy

Clinical Vignette 

You are working a day shift at a community hospital emergency department. You are handed the chart of a patient presenting with abdominal pain. You note the following vital signs: heart rate 120, blood pressure 85/55, respiratory rate 20, and O2 saturation 95%.

Case Summary

A 60-year-old male presents with a four-day history of abdominal pain secondary to cholangitis. The patient is in septic shock requiring intravenous fluid resuscitation, empiric broad-spectrum antibiotics, and vasopressor support, and he suffers a PEA arrest prior to disposition for advanced imaging or definitive management.

Download the case here: Cholangitis

ECG for case found here: 

Sinus tachycardia

(ECG source: http://cdn.lifeinthefastlane.com/wp-content/uploads/2011/12/sinus-tachycardia.jpg)

CXR for case found here: 


Ultrasound for case found here:

http://www.pocustoronto.com/wordpress/?p=264