Validity – Starting with the Basics

This critique on validity and how it relates to simulation teaching was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

When designing simulation exercises that will ultimately lead to the assessment and evaluation of a learner’s competency for a given skill, the validity of the simulation as a teaching tool should be addressed on a variety of levels. This is especially relevant when creating simulation exercises for competencies outside of the medical expert realm such as communication, team training and problem solving.

As a budding resuscitationist and simulationist, understanding validity is vital to ensuring that the simulation exercises that I create are actually measuring what they intend to measure, that is, they are valid (Devitt et al.).  As we look ahead to Competency Based Medical Education (CBME), it will become increasingly important to develop simulation exercises that are not only interesting and high-yield with respect to training residents in critical skills, but also have high validity with respect to reproducibility as well as translation of skills into real world resuscitation and patient care.

In order to better illustrate the various types of validity and how they can affect simulation design, I will present an example of an exercise I implemented when I was tasked with teaching a 5-year-old to tie her shoelaces. To do so, I taught her using a model very similar to one I found on Pinterest:

We first learned the rhyme, then used this template to practice over and over again. The idea behind using the model was to provide the reference of the poem right next to the shoes, and also to enlarge the scale of the shoes and laces, since her tiny feet meant tiny laces on shoes that were difficult for her to manipulate. We could also do this exercise at the table, which allowed us to be comfortable as we learned. At the end of the exercise, I gave her a “test” and asked her to tie the cardboard shoes to see if she remembered what we had learned. While there was no rigorous evaluation scheme, the standard was that she should be able to tie the knot to completion (competency), leading to two loops at the end.

I applied my simulation learning to this experience to assess the validity of this exercise in improving her ability to tie her laces. The test involved her tying these laces by herself without prompting.

Face validity: Does this exercise appear to test the skills we want it to?

Very similar to “at face value,” face validity is how much a test or exercise looks like it is going to measure what it intends to measure. This can be assessed from an “outsider” perspective, for example by asking her mom whether she feels this test could measure her child’s ability to tie a shoe. Whether this test works or not is not the concern of face validity; rather, it is whether it looks like it will work (Andale). Her mom thought this exercise would be useful in learning how to tie shoes, so face validity was achieved.

Content validity: Does the content of this test or exercise reflect the knowledge the learner needs to display? 

Content validity is the extent to which the content in the simulation exercise is relevant to what you are trying to evaluate (Hall, Pickett and Dagnone). Content validity requires an understanding of the content required to either learn a skill or perform a task. In Emergency Medicine, content validity is easily understood when considering a simulation exercise designed to teach learners to treat a Vfib arrest—the content is established by the ACLS guidelines, and best practices have been clearly laid out. For more nebulous skill sets (communication, complex resuscitations, rare but critical skills like bougie-assisted cricothyroidotomies, problem solving, team training), the content is not as well defined, and may require surveys of experts, panels, and reviews by independent groups (Hall, Pickett and Dagnone). For my shoelace-tying learner, the content was defined as a single way to tie her shoelaces; however, it did not include the initial lacing of the shoes or how to tell which shoe is right or left, and, most importantly, the final test did not include these components. Had I tested her on lacing or on appropriately choosing right or left, I would not have had content or face validity. This speaks to choosing appropriate objectives for a simulation exercise—objectives are the foundation upon which learners develop a scaffolding for their learning. If instructors are going to use simulation to evaluate learners, the objectives will need to clearly drive the content, and in turn the evaluation.

Construct Validity: Is the test structured in a way that actually measures what it claims to?

In short, construct validity is assessing if you are measuring what you intend to measure.

My hypothesis for our exercise was that any measurable improvement in her ability to tie her shoelaces would be attributable to the exercise, and that with this exercise she would improve her ability to complete the steps required to tie her shoelaces. At the beginning of the shoelace tying exercise, she could pick up the laces, one in each hand, and then looked at me mostly blankly for the next steps. At the end of the exercise, and for the final “test,” she was able to hold the laces and complete the teepee so it’s “closed tight” without any prompting. The fact that she improved is evidence to support the construct; however, construct validity is an iterative process and requires different forms of evaluation to prove the construct. To verify construct validity, other tests with similar qualities can be used. For this shoelace tying exercise, we might say that shoelace tying is a product of fine motor dexterity, and fine motor dexterity theory would hold that as her ability to perform other dexterity-based exercises (tying a bow, threading beads onto a string) improves, so would her performance on her test. To validate our construct, we could then perform the exercise over time and see if her performance improves as her motor skills develop, or compare her performance on the test to that of an older child or adult, who would have better motor skills and would be expected to perform better.

External validity: Can the results of this exercise or study be generalized to other populations or settings, and if so, which ones?

With this shoelace tying exercise, should the results be tested and a causal relationship be established between this exercise and ability to tie shoes, then the next step would be to see if the results can be generalized to other learners in different environments. This would require further study and a careful selection of participant groups and participants to reduce bias. This would also be an opportunity to vary the context of the exercise, level of difficulty, and to introduce variables to see if the cardboard model could be extrapolated to actual shoe tying.

Internal validity: Is there another cause that could explain my observation?

With this exercise, her ability to tie laces improved over the course of the day. In order to assess internal validity, it is important to consider whether any improvement or change in behaviour could be attributed to another external factor (Shuttleworth). For this exercise, there was only one instructor and one student in a consistent environment. If we had reproduced this exercise using a few novice shoelace tiers and a few different instructors, it might have added confounders to the experiment, making it less clear whether improvements in shoelace tying were attributable to the exercise or to the instructors. Selection bias can also affect internal validity—for example, selecting participants who were older (and therefore had more motor dexterity to begin with) or who had previous shoelace tying training would likely affect the outcome. For simulation exercises, internal validity can be confounded by multiple instructors, differences in the mannequin or simulation lab, and different instructor styles, which may lead to differences in learning. Overcoming these challenges to internal validity is partly achieved by robust design, but also by repeating the exercise to ensure that the outcomes are reproducible across a wider variety of participants than the sample cohort.

There are many types of validity, and robust research projects require an understanding of validity to guide the initial design of a study or exercise. Through this exercise in validity I was able to take the somewhat abstract concepts of face validity and internal validity and ground them in practice through a relatively simple exercise. I have found that doing this has helped me form a foundation in validity theory, which I can now expand into evaluating the simulation exercises that I create.

 

REFERENCES

1) Andale. “Face Validity: Definition and Examples.” Statistics How To, 2015. Web. October 20, 2017.

2) Devitt, J. H., et al. “The Validity of Performance Assessments Using Simulation.” Anesthesiology 95.1 (2001): 36-42. Print.

3) Hall, A. K., W. Pickett, and J. D. Dagnone. “Development and Evaluation of a Simulation-Based Resuscitation Scenario Assessment Tool for Emergency Medicine Residents.” CJEM 14.3 (2012): 139-46. Print.

4) Shuttleworth, M. (Jul 5, 2009). Internal Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/internal-validity

5) Shuttleworth, M. (Aug 7, 2009). External Validity. Retrieved Oct 26, 2017 from Explorable.com: https://explorable.com/external-validity

Simulation-Based Assessment

This critique on simulation-based assessment was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

You like to run simulations.  You have become adept at creating innovative and insightful simulations. You have honed your skills in leading a constructive debrief.  So what’s next? You now hope to be able to measure the impact of your simulation.  How do you design a study to measure the effectiveness of your simulation on medical trainee education?

There are numerous decisions to make when designing a sim-based assessment study. For example, who is included in the study?  Do you use direct observation or videotape recording or both? Who will evaluate the trainees? How do you train your raters to achieve acceptable inter-rater reliability? What are you assessing – team-based performance or individual performance?

One key decision is the evaluation tool used for assessing participants.  A tool ideally should:

  • Have high inter-rater reliability
  • Have high construct validity
  • Be feasible to administer
  • Be able to discriminate between different levels of trainees

Two commonly used sim-based assessment tools are Global Rating Scales (GRS) and Checklists.  Here, these tools will be compared to evaluate their role for the assessment of simulation in medical education.

Global Rating Scales vs Checklists

GRS are tools that allow raters to judge participants’ overall performance and/or provide an overall impression of performance on specific sub-tasks.1 Checklists are lists of specific actions or items that are to be performed by the learner. Checklists prompt raters to attest to directly observable actions.1

Many GRS ask raters to use a summary item to rate overall ability or to rate a “global impression” of learners. This summary item can be a scale from fail to excellent, as in Figure 1.2 Another GRS may assess learners’ ability to perform a task independently by having raters mark learners on a scale from “not competent” to “performs independently”. In studies, the overall GRS has been shown to be more sensitive than checklists at discriminating between learners’ levels of experience.3,4,5 Other research has shown that GRS demonstrate superior inter-item and inter-station reliability and validity compared with checklists.1,6,7,8 GRS can be used across multiple tasks and may be able to better measure expertise levels in learners.1

 Some of the pitfalls of GRS are that they can be quite subjective.  They also rely on “expert” opinion in order to be able to grade learners effectively and reliably.


Figure 1: Assessment tool used by Hall et al. in their study evaluating a simulation-based assessment tool for emergency medicine residents, using both a checklist and a global assessment rating.2

Checklists, on the other hand, are thought to be less subjective, though some studies may argue this is false as the language used in the checklist can be subjective.10 If designed well, however, checklists provide clear step-by-step outlines for raters to mark observable behaviours.  A well-designed checklist would be easy to administer so any teacher can use it (and not rely on experts to administer the tool).  By measuring defined and specific behaviours, checklists may help to guide feedback to learners.

However, some pitfalls of checklists are that high scores have not been shown to rule out “incompetence” and therefore may not accurately reflect skill level.9,10 Checklists may also cover multiple areas of competence, which may contribute to lower inter-item reliability.1 Other studies have found that despite checklists being theoretically easy to use, inter-rater reliability was consistently low.9 However, a systematic review of the literature found that checklists performed comparably to GRS in terms of inter-rater reliability.1
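Since inter-rater reliability comes up repeatedly in these comparisons, a brief worked sketch may make the idea concrete. The Python snippet below is not drawn from any of the cited studies and the scores are invented; it computes raw percent agreement and Cohen’s kappa for two hypothetical raters scoring the same ten dichotomous checklist items. Kappa discounts the agreement expected by chance, which is one reason chance-corrected statistics (kappa, or intraclass correlations for rating scales) are usually reported rather than raw agreement.

  # Hypothetical example: two raters scoring the same 10 checklist items
  # (1 = item performed, 0 = not performed). All scores are invented.
  from collections import Counter

  def cohens_kappa(rater_a, rater_b):
      n = len(rater_a)
      observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
      # Agreement expected by chance, from each rater's marginal frequencies.
      freq_a, freq_b = Counter(rater_a), Counter(rater_b)
      expected = sum((freq_a[k] / n) * (freq_b[k] / n)
                     for k in set(rater_a) | set(rater_b))
      return (observed - expected) / (1 - expected)

  rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
  rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
  raw_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
  print(raw_agreement)                              # 0.8
  print(round(cohens_kappa(rater_a, rater_b), 2))   # 0.52 once chance agreement is removed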

 

TABLE 1: Pros and Cons of Global Rating Scales and Checklists

Global Rating Scales

Pros:
  • Higher internal reliability
  • More sensitive at discriminating level of training
  • Higher inter-station reliability and generalizability

Cons:
  • Less precise
  • Subjective rater judgement and decision making
  • May require experts or more rater training in order to rate learners

Checklists

Pros:
  • Good for the measurement of defined steps or specific components of performance
  • Possibly more objective
  • Easy to administer
  • Easy to identify defined actions for learner feedback

Cons:
  • Possibly lower reliability
  • Requires dichotomous ratings, possibly resulting in loss of information

Conclusion

With the move towards competency-based education, simulation will play an important role in evaluating learners’ competencies. Simulation-based assessment allows for direct evaluation of an individual’s knowledge, technical skills, clinical reasoning, and teamwork. Assessment tools are an important component of medical education.

An optimal assessment tool for evaluating simulation would be reliable, valid, comprehensive, and able to discriminate between learners’ abilities. Global Rating Scales and Checklists each have their own advantages and pitfalls, and each may be used for the assessment of specific outcome measures. Studies suggest that GRS have some important advantages over checklists, although the evidence for checklists appears somewhat stronger than previously thought. Whichever tool is chosen, it is critical to design and test the tool to ensure that it appropriately assesses the desired outcome. If feasible, using both a Checklist and a Global Rating Scale would help to optimize the effectiveness of sim-based education.

 

REFERENCES

1           Ilgen JS et al. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015 Feb;49(2):161-73

2           Hall AK. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. CJEM. 2012 May;14(3):139-46

3           Hodges B et al.  Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–6

4           Morgan PJ et al. A comparison of global ratings and checklist scores from an undergraduate assessment using an anesthesia simulator. Acad Med. 2001;76(10) 1053-5

5           Tedesco MM et al. Simulation-based endovascular skills assessment: the future of credentialing? J Vasc Surg. 2008 May;47(5):1008-11

6           Hodges B et al. OSCE checklists do not capture increasing levels of expertise. Acad Med. 1999;74:1129–1134

7           Hodges B and McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–1016

8           Regehr G et al. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993-7

9           Walsak A et al. Diagnosing technical competence in six bedside procedures: comparing checklists and a global rating scale in the assessment of resident performance. Acad Med. 2015 Aug;90(8):1100-8

10          Ma IW et al. Comparing the use of global rating scale with checklists for the assessment of central venous catheterization skills using simulation. Adv Health Sci Educ Theory Pract. 2012;17:457–470

 

Simulation Design

This critique on simulation design was written by Alice Gray, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

Have you ever designed a simulation case for learners? If so, did you create your sim on a “cool case” that you saw?  I think we have all been guilty of this; I know I have. Obviously a unique, interesting case should make for a good sim, right?  And learning objectives can be created after the case creation?

Recently, during my Simulation, Health Sciences and Resuscitation in the ED fellowship (SHRED), I have come to discover some of the theory and method behind the madness of creating sim cases. And I have pleasantly discovered that rather than making things more complicated, having an approach to sim creation not only helps to guide meaningful educational goals but also makes life a whole lot easier!

I find it helpful to think of sim development in the PRE-sim, DURING-sim, and POST-sim phases.

In a systematic review of simulation-based education, Issenberg et al. describe the 10 aspects of simulation interventions that lead to effective learning, which I will incorporate into the different phases of sim design.1

PRE-sim

Like many things, the bulk of the work and planning is required in the PRE phase.

When deciding whether or not to use sim as a learning tool, the first step should be to ask what modality is most appropriate for the stated learning objectives.1 A one-size-fits-all approach is not optimal for learning. Lioce et al put this well in their paper on simulation design: the “modality is the platform of the experience”.2 For me, one of the most important considerations is the following: can the learning objectives be appropriately attained through simulation, and if so, what type of simulation? For example, if the goal is to learn about advanced airway adjuncts, this may be best suited to repetitive training on an airway mannequin or a focused task trainer. If the goal is to work through a difficult airway algorithm, perhaps learners should progress through cases requiring increasingly difficult airway management using immersive, full-scale simulation. You can try in-situ inter-professional team training to explore systems-based processes. Basically, a needs assessment is key. The paper by Lioce et al. describes guidelines for working through a needs assessment.2

Next, simulation should be integrated into an overall curriculum to provide the opportunity to engage in repetitive (deliberate) practice.1 Simulation in isolation may not produce effective, sustainable results.3 An overall curriculum, while time-consuming to develop and implement, is a worthy undertaking. Having one simulation build upon others may improve learning through spaced repetition and varied context, delivery and level of difficulty.

This can be difficult to achieve given constrained time, space and financial resources.  Rather than repeat the same cases multiple times, Adler et al created cases that had overlapping themes; the content and learning objectives differed between the cases but they had similar outcome measures. 3 This strategy could be employed in curriculum design to enhance repeated exposure while limiting the number of total sessions required.

Effective programmatic design should facilitate individualized learning and provide clinical variation.1 Lioce et al. refer to a needs assessment as the foundation for any well-designed simulation.2 Simulation has often addressed certain competencies residents are supposed to master (airway, toxicology, trauma, pediatrics, etc.) without seeking input a priori on the learning needs of the residents. It may be valuable to survey participants and design simulations based on perceived curriculum gaps or learning objectives, or to assess baseline knowledge with structured assessment techniques prior to designing cases and curricula. (NB: Such a project is currently underway, led by simulation investigators at Sunnybrook Hospital in Toronto.)

Learners should have the opportunity to practice with increasing levels of difficulty:1 It is logical that learners at different stages of their training require different gradations of difficulty. Dr. Martin Kuuskne breaks down the development of simulation cases into their basic elements. He advocates thinking of each sim objective in terms of both knowledge and cognitive process.4

The knowledge components can be divided into the medical and crisis resource management (CRM), or more preferably, non-technical skills.5 Medical knowledge objectives are self-explanatory and should be based on the level of trainee. Non-technical skills objectives typically relate to team-based communication, leadership, resource utilization, situational awareness and problem solving.6 Kuuskne’s post makes the very salient point that we need to limit the number of objectives in both of these domains, as too many can quickly overwhelm learners and decrease absorption of knowledge.

The cognitive process objectives can also be developed with increasing complexity, depending on the level of the trainee.4 For example, the lowest level of learning is “remembering” (describing, naming, repeating, etc.), while the highest level is “creating” (formulating, integrating, modifying, etc.). A case could be made for involving senior learners in creating and implementing their own sim cases.

DURING-sim

As part of creating scripts and cases, case designers should try to anticipate learner actions and pitfalls. There will always be surprises and unexpected actions (a good reason to trial, beta test and revise before deploying). On EMSimCases.com, Kuuskne outlines his approach to creating the case progression and how it can be standardized.6 The patient in the simulation moves through a set of defined states: the condition of the patient created by the vital signs and their clinical status.6 We can think of progression between states in terms of learner modifiers and triggers: modifiers are actions that make a change in the patient, whereas triggers are actions that change the state of the patient. I found this terminology helpful when outlining case progression.

Simulation allows for standardization of learning in a controlled environment.1 The truth of residency training is that even within the same program, residents will have very different experiences. One resident ahead of me had, by graduation, taken part in 10 resuscitative thoracotomies. Many residents in the same class had not seen any. We cannot predict what walks through our doors, but we can try to give residents the same baseline skills and knowledge to deal with whatever does.

POST-sim

 Feedback is provided during the learning experience1 unless in an exam-type setting, where it should be given after.  It is important again to note the necessity of limiting the number of learning objectives, so you have room for scripted and unscripted topics of conversation.  Debriefing the case should be a breeze, as it should flow from the case objectives created at the beginning.

Going further than “the debrief” is the question of how we evaluate the value of sim. To me, this is the most difficult part, and it is rarely done. Evaluation of each sim case should be sought from participants and stakeholders, in addition to pilot testing. That information needs to be fed forward to make meaningful improvements in case design and implementation.

Outcomes or benchmarks should be clearly defined and measured. In their randomized study, Adler et al created clearly defined critical rating checklists during the development and needs assessment of their sim cases.3 They then tested each case twice on residents to get feedback.

In summary, although a “cool case” is always interesting, it doesn’t always make the best substrate for teaching and learning in the simulator.  Thoughtful case creation for simulation needs to go beyond that, breaking down the design process into basic, known components and using a structured theory-based approach in order to achieve meaningful educational outcomes.

REFERENCES:

1               Issenberg et al. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10 –28.

2               Lioce et al. Standards of Best Practice: Simulation Standard IX: Simulation Design. Clinical Simulation in Nursing. 2015;11:309-315.

3               Adler et al. Development and Evaluation of a Simulation-Based Pediatric Emergency Medicine Curriculum. Academic Medicine. 2009;84:935-941.

4               Kuuskne M. How to develop targeted simulation learning objectives – Part 1: The Theory. April 21, 2015 https://emsimcases.com/2015/04/21/how-to-develop-targeted-simulation-learning-objectives-part-1-the-theory/

5               Kuuskne M. How to develop targeted simulation learning objectives – Part 2: The Practice. June 15, 2015. https://emsimcases.com/2015/06/16/how-to-develop-targeted-simulation-learning-objectives-part-2-the-practice/

6               Kuuskne M. Case Progression: states, modifiers and triggers. May 19, 2015. https://emsimcases.com/2015/05/19/case-progression-states-modifiers-and-triggers/

 

 

 

Cashing out by buying in – How expensive does a mannequin have to be to call a simulation “high fidelity?”

This critique on simulation fidelity was written by Alia Dharamsi, a PGY 4 in Emergency Medicine at The University of Toronto and 2017 SHRED [Simulation, Health Sciences, Resuscitation for the Emergency Department] Fellow.

How expensive does a mannequin have to be to call a simulation “high fidelity?”


That was the question I was pondering this week, as our SHRED theme this month is simulation in medical education. In my 4th year of residency at University of Toronto, most of my simulation training has been in one of our two simulation labs, using one of our three “high fidelity” mannequins. However, even though the simulation labs and equipment have been very consistent over the past few years, I have found a fluctuating attentiveness and “buy-in” to these simulation sessions: some have felt very real and have resulted in a measurable level of stress and urgency to improve the patient’s (read: mannequin’s) outcome while others have felt like a mandatory hoop through which to jump in order to pass a rotation.

It should come as no surprise that in Emergency Medicine, simulation is a necessary part of our development as residents. Simulation-based medical education allows trainees to meet standards of care and training, mitigates risks to patients, develops clinical competencies, improves patient safety, aids in managing complex patient encounters, and protects patients [1]. Furthermore, in emergency medicine, simulation has allowed me to practice rare and life-saving critical skills like cricothyroidotomies and thoracotomies before employing them in real-time resuscitations. Those who know me will tell you that when it comes to simulation I fully support its use as an educational tool, but there does still seem to be an ebb and flow to how much I commit to each sim case that I participate in as a learner.

During a CCU rotation, I was involved in a relatively simple “chest pain” simulation exercise. As the circulating resident, I was tasked with giving the patient ASA to chew. In that moment I didn’t just simulate giving ASA; I took the yellow lid from an epinephrine kit (it looked like a small circular tablet) and put it in the mannequin’s mouth, asking him to chew it. I did not think much of it until our airway resident was preparing to intubate, and the whole case derailed into an “airway foreign body” scenario—to the confusion of the simulationists sitting behind the window, who didn’t know how that foreign body got into the airway in the first place. Why did I do that? I believe it’s because I bought into the scenario: in my eyes that mannequin was my patient, and my patient needed the ASA to chew. The case of chest pain—although derailed into a difficult airway case by my earnest delivery of medications—was in the context of a residency rotation where I was expected to manage the CCU independently overnight. That context allowed me to buy into the case, because I knew these skills were transferrable to my role as a CCU resident. My buy-in has had less to do with the mannequin and the physical space and everything to do with how the simulation fit into the greater context of my current training.

There has been discussion amongst simulationists that there should be a frame shift away from fidelity and towards educational effectiveness: helping to engage learners, providing framework and context to aid them in suspending their disbelief, and providing structure to apply the simulation to real-time resuscitations in order to enhance learner engagement [2]. The notion of functional fidelity is one that resonates with me as a budding simulationist; if a learner has an educational goal and is oriented to how the simulation will provide the context and platform to learn that goal, the learner may more easily “project fidelity onto the simulation scenario.” That is, the learner will buy into the simulation [2].

 So how do we facilitate buy-in?

We can start by orienting learners meaningfully and intentionally to the simulation exercises [3]. This can be accomplished by demonstrating how the concepts from the simulation are transferrable to other contexts, which allows learners to engage on a deeper level with the simulation and see the greater applicability of what they are learning [2]. We can’t assume learners understand why or how this exercise is applicable to them. A chest pain case for a senior resident in emergency medicine has very different learning outcomes than the same case for an off-service junior resident rotating through the ER; the same can be said for a resident primarily working in the hospital versus one working in an outpatient clinic. Tailoring case objectives to specific learners provides relevant skills in the context of their training, giving them a reason to buy in to the scenario. Moving beyond “to learn…” or “to outline the management of…”, I would advocate that specifically outlining objectives for the level and specialties of participating learners will help them see the applicability of the skills they gain in the simulation.

We can also use the specific objectives and context that open the simulation session to foster a more directed debrief. The post-simulation discussion should cover not only medical management principles but also what learners would do if they encountered a similar situation in their specific work environment (clinic, ward, etc.), transferring the learning out of the simulation lab and into real-world medical practice.

If we are going to see simulation as a tool, let’s see it as one of those fancy screwdrivers with multiple bits, and stop trying to use the screwdriver handle as a hammer for every nail. No one mannequin, regardless of how expensive and how many fancy features it has, can replace the role of a thoughtful facilitator who can help learners buy-into the simulation. If facilitators take the time to orient the learner to their specific learning objectives and then reinforce that context in the debrief discussion, they can increase the functional fidelity of the session and aid learners in maximizing their benefit from each simulation experience.

 

Citations 

  1. Ziv, A., Wolpe, P. R., Small, S. D., & Glick, S. (2003). Simulation-Based Medical Education. Academic Medicine, 78(8), 783-788. doi:10.1097/00001888-200308000-00006
  2. Hamstra, S. J., Brydges, R., Hatala, R., Zendejas, B., & Cook, D. A. (2014). Reconsidering Fidelity in Simulation-Based Training. Academic Medicine, 89(3), 387-392. doi:10.1097/acm.0000000000000130
  3. Issenberg, S. B., Mcgaghie, W. C., Petrusa, E. R., Gordon, D. L., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher, 27(1), 10-28. doi:10.1080/01421590500046924

 

Moulage Tips and Tricks

This week’s post is written by Dr. Cheryl ffrench. She is the Director of Simulation for the Department of Emergency Medicine at the University of Manitoba and is also one of the advisory board members for EMSimCases.


Emergency Medicine is Sensory

Emergency Medicine is a very sensory specialty. Walking into the resuscitation room, the appearance, sound and sometimes smell of the patient provide a wealth of information before introductions are even made. Recognition of the “sick” patient is something we strive to teach our residents and medical students. Simulation is an excellent tool to help teach core emergency medicine skills. The principles of crisis resource management are essential to the practice of emergency medicine, and simulation provides us with an excellent tool to bring them to light. However, when the simulation stem describes an 80 year old female patient in respiratory failure and the learners walk into a room to find a manikin that more closely resembles a 25 year old Arnold Schwarzenegger, despite being asked to suspend their disbelief, their approach to the patient can’t help but be different. Similarly, when asked to assess the trauma patient, the visual cues of finding the stab wound or open fracture help to reinforce both their clinical skills and the simulation experience. Most manikins used in simulation today are large, robust, healthy males in the prime of their simulated lives. However, this does not reflect the patient population of most emergency practices.

Simple Fixes to Improve Realism


Simple measures can turn a “he” into a “she” like remembering to exchange the external genitalia and adding a simple wig. Suddenly the patient has a more feminine appearance that reflects the other 50% of our patient population. A grey wig can make him (or her) age to an extent but investing in some costume masks found at any party store can take the manikin’s healthy 25 year old skin and give the illusion of a face wrinkled by time. These masks fit most standard adult size manikins. The softer and more form fitting the mask, the nicer it is for the learner to work with when intubating but even the less pliable masks have little impact on airway management so long as they come with an open mouth on the mask.

Moulage in Trauma

The wounds and injuries that our trauma patients present with often dictate our index of suspicion for the severity of their illness and thus our level of concern. Seeing the bleeding wound in the centre of the chest or over the anterior neck raises a level of anxiety and serves as a constant reminder of the seriousness of the trauma. That is difficult to create if the learners are simply told by the confederate that there is a “big stab wound” or an “expanding hematoma”, as these findings can be easily lost or forgotten without the visual reminder in the midst of a chaotic simulation case. Stab wounds can be easily added with some basic Halloween “wounds” found at any Halloween or party store. For the more creative, they can also be made even more realistic through some simple techniques that are described at the end of this blog.

More Simple Adjuncts

The placement of a pregnant abdomen on the trauma patient provides another prompt for the unique management principles for that patient population. Place a fetal manikin in the belly and suddenly you have a perimortem C-section case that will never be forgotten. Bubble wrap underneath the skin of the manikin’s neck creates the tactile feel of subcutaneous emphysema, which, if also moulaged with bruising on the skin, provides your learners with the kind of frightening airway scenario that keeps most emergency practitioners up at night. Moulage combined with either preset vocals or some voice-over acting will help to create a unique emergency medicine simulation experience that your learners won’t soon forget.

Mannequin Maintenance

When applying makeup to mannequin skins, it is important to first prepare the area so that the makeup does not stain the skin. Here are some helpful tricks, courtesy of Jane Fedoruk, a Simulation Technician at the University of Manitoba:

  1. Wipe the area for application with a thin layer of Vaseline or baby oil.
  2. Lightly wipe again with a dry cloth to remove excess oil.
  3. Apply makeup lightly and avoid rubbing it into the pores of the skin.
  4. Avoid putting the makeup on until as late as possible. Leaving the colours on for extended periods of time increases the probability of a stain.
  5. As much as possible, use only products provided or sanctioned by the mannequin company.
  6. Be particularly careful when using red or blue based makeup as they stain the most.
  7. Remove the moulage as soon as possible after use, and clean the area with mannequin cleaner to remove the oils.

All photos courtesy of Cheryl ffrench and Jane Fedoruk.

Debriefing Techniques – the Art of Guided Reflection

Simulation without debriefing is really just an expensive way of either making learners feel bad about themselves or allowing learners to practice performing poorly. This is why the theory behind debriefing is so important.

Debriefing is one of the most amazing teaching tools available to an instructor. Debriefing allows insight into a learner’s thought process such that an instructor can tailor teaching to a learner’s specific needs. Kolb’s learning cycle1 and Schon’s description of the Reflective Practitioner2 allow us to see why debriefing is such a useful tool. We must actively reflect on an experience to learn from it; debriefing allows educators to help guide that reflection.

PEARLS Framework

While debriefing is arguably the most important component of simulation education, it is also a difficult skill to acquire. Eppich and Cheng3 have published an excellent approach to debriefing that reviews many of the key steps a novice simulation educator should aim to follow. They have called it the PEARLS approach (Promoting Excellence and Reflective Learning in Simulation). We will review its four phases here.

1. Reactions Phase

This is where learners are invited to express their raw feelings about the case. Often, learners will do this without a formal invitation (for example, you may hear initial reactions while walking from the simulator to the debriefing room). It is important to invite all learners to have a chance to vent during this stage.

2. Description Phase

This phase begins by asking a learner to describe what they think the case was about. This allows the educator and the learners to see if they are on the same page. Often, this leads to important issues for discussion during the next phase.


3. Analysis Phase

Here, the educator must tailor their style of debriefing to suit both the learners in the room and the time available for the debriefing. This phase is what educators often think about when they envision debriefing. Essentially, the analysis phase is where learners can go through guided reflection.

+/Δ Method

There are two common styles of guided reflection described. The first is the +/Δ method. This involves probing learners as to what went well (the +) and what could be improved or changed for the future (the Δ). Many who are new to debriefing find themselves turning to this style at first.

Advocacy/Inquiry Method

A second, commonly used style is called advocacy/inquiry.4 This approach leads to incredible insights into the knowledge and performance of the learners. It can be somewhat more challenging to execute well. The basic premise is that one must first describe a noted performance gap. This is followed by a question as to the learner’s frame of mind at the time of the performance. The learner’s answer leads the instructor as to what learning points may need to be addressed. Sometimes, the entire room of learners is unsure of a next appropriate step in management. In this case, the debriefer must simply provide directed teaching. In other cases, the learner has made a slight cognitive error. Often, these can be addressed through facilitated discussion with other learners.

4. Summary Phase

Once the group has gone through all the desired learning objectives in the analysis phase, it is imperative that the instructor guides a review of key points related to the objectives. If time is short, the instructor can provide the summary himself. If time is more abundant, it can be useful to have the learners go through their key learning points.

As we can see, a fair amount of effort is required to facilitate an excellent debrief. With frameworks like the PEARLS approach, experienced and inexperienced educators alike have a practical means upon which to build their debriefing skills.

What tips and tricks do you use in your debriefing?

References:

  1. Kolb DA. Experiential learning: experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall; 1984.
  2. Schon D. The Reflective Practitioner: How Professionals Think in Practice. New York: Basic Books. 1983.
  3. Eppich, W., Cheng, A. Promoting excellence and reflective learning in simulation (PEARLS). Simul Healthc. 2015:1. doi:10.1097/SIH.0000000000000072.
  4. Rudolph, JW., Simon R., Rivard P., Dufresne RL., Raemer, DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361-376.

How to develop targeted simulation learning objectives – Part 2: The Practice

In part 1 of this two part series (https://emsimcases.com/2015/04/21/how-to-develop-targeted-simulation-learning-objectives-part-1-the-theory/), we used the revised Bloom’s taxonomy to describe an approach to developing simulation-based learning objectives by targeting a specific, complex knowledge domain and a higher level cognitive process.

Now that we know the theory behind making targeted simulation learning objectives, what kind of learning objectives should be included in a team-based resuscitation simulation scenario?

Team based simulation can be used to learn and assess a variety of different components of resuscitation skills. These simulated events display the knowledge, skills and attitudes of learners in a controlled setting. What makes simulation different from other traditional models of learning is that it combines components of crisis resource management (CRM) with medical knowledge and skills into a complex educational event. Keeping this in mind, while developing objectives for a simulated scenario, it helps to separate the CRM and medical knowledge objectives. A separation of these two key components allows for targeted feedback directed at specific areas of the learners’ performance and aids in their assessment.

A common pitfall in the development of objectives for a simulated case is including too many of them! While there are a multitude of soft skills as well as medical decisions being made during the simulated event, both the learners and assessors benefit from a limited number of clear objectives. Debriefing after a simulation is critical to the learning experience, and having too many objectives may dilute the main teaching points of the case. As an example, at the McGill University Emergency Medicine residency program, we aim for 2 CRM-based objectives and 3 medical knowledge objectives. While this is in no way the rule, we have found that tailoring the case to a smaller number of clear and well-developed objectives allows for productive and high yield debriefing sessions.

Learning Objectives for a Tricyclic Antidepressant Overdose Case


As discussed in a previous post (https://emsimcases.com/2015/04/07/crisis-resource-management/), the main components of CRM include communication, leadership, resource utilization, situational awareness and problem solving.1 A case can be specifically tailored toward a CRM objective or vice versa. For example, an objective focusing on resource utilization and triage can guide the development of a simulated case with two patients in a resource-limited setting. Conversely, a simulated STEMI case can include an objective focusing on leadership and the team leader maintaining a global perspective of the case. There are no guidelines on which CRM based objectives to include, but ensuring that your cases utilize different CRM components allows your learners to focus on a few important skills at a time and ensures that your learners are exposed to each component of CRM in a simulated setting.

Medical objectives encompass the core medical content that the simulated case was designed to address. When developing the medical objective, remember to focus on a higher cognitive process, such as “applying” over “remembering”, and a higher-level knowledge domain, such as “procedural knowledge” that includes skills and algorithms. Again, there is no limit to what medical objectives you can include, as long as they are well developed and specific. When developing the medical objective for the case, it may help to take a step back and ask yourself “what do I want my learners to take away from this case?” It also helps to consider the training level of the learners, where simulation fits within your full educational curriculum as well as your setting and to develop the objectives accordingly. As an example, an airway case may contain an objective on the choice of an induction and paralytic agent for intubation for junior learners, whereas an objective on a “can’t intubate, can’t ventilate” situation may be more suitable for senior learners.

Defining learning objectives for your simulated scenarios is key for case development, debriefing and, ultimately, learning. Using theory, we can create targeted objectives that optimize the learning time spent in the simulated setting. Breaking up the objectives into CRM and medical knowledge while limiting the total number of objectives can help focus both the learner and the educator on the teaching points of the case. Through careful consideration of learning objective development, simulation can be used both to fill potential gaps in your educational curriculum and to enhance the resuscitation skills, CRM skills and medical knowledge of your learners.

Take Home Points

1) Divide simulation objectives into CRM or medical objectives

2) Limit the number of objectives for each case

3) Apply theory to develop targeted and specific objectives to align them with the teaching strategy of simulation

4) Diversify your CRM objectives throughout your simulation curriculum

5) For medical objectives, ask yourself “what do I want my learners to take away from this case?”

6) Consider the training level, full training curriculum and setting when developing medical objectives.

  1. Hicks CM, Kiss A, Bandiera GW, Denny CJ. Crisis Resources for Emergency Workers (CREW II): Results of a pilot study and simulation-based crisis resource management course for emergency medicine residents. Can J Emerg Med. 2012;14(Crew Ii):354-362. doi:10.2310/8000.2012.120580.

Simulation olympics: innovations that showcase EM resident resuscitation skills

This post is written by Dr. Damon Dagnone. Dr. Dagnone is an Assistant Professor in the Department of Emergency Medicine and the Faculty Lead of CBME for Postgraduate MedEd at Queen’s University. He is the Director of the Queen’s Simulation Olympics and is also the Co-Chair of the CAEP Simulation Olympiad. When not in the sim lab, the ER, or in meetings, he can be found chasing his kids and trying to enjoy his 40s. To contact Dr. Dagnone, email him at jdd1@queensu.ca

Background

Team-based simulation training has increasingly been utilized to train inter-professional teams throughout hospitals and medical training programs. The benefits of using simulation-based team training center around an adult learning approach, which offers its learners deliberate practice, context-dependent and experiential learning. Numerous studies have demonstrated the benefits of integrating simulation-based training with an inter-professional approach. Recent studies have shown that physicians trained with simulation provide a higher level of care in resuscitation/cardiac arrest, improve efficiency of team performance, and reduce the rate of medical error, thus minimizing patient harm.

The Queen’s Experience

In an effort to stimulate inter-professional team training in resuscitation within our EM residency program at our academic teaching hospital, an annual simulation-based resuscitation competition named “The Simulation Olympics” was launched as a pilot project following the 2010 Winter Olympics. The Simulation Olympics, now in its 6th year, has become a popular three-day simulation-based competition with associated preparatory training sessions, where inter-professional teams comprised of individuals from across our hospital compete against each other in standardized resuscitation scenarios.

http://meds.queensu.ca/education/simulation/simulation_olympics

Now in its sixth year, the Simulation Olympics competition has grown in scope and size with no less than 100 resident trainees, medical students, teaching faculty, staff RNs and RTs, paramedics, and technicians participating annually. With support from the Associate Dean of Postgraduate Medical Education, the CEO of Kingston General Hospital, and numerous PGME Program Directors (EM, Critical Care, IM, Anaesthesia, Pediatrics), the Simulation Olympics has permanent annual funding approaching $30 000. Far exceeding the original vision of creating a novel and fun atmosphere to learn for EM residents at Queen’s, the competition has served as a vehicle to promote the development and implementation of team-based simulation training initiatives at our academic teaching centre. It has become a fantastic annual showcase of awesome talent and grows more exciting each year.

The Extension to CAEP


CAEP 2014 Simulation Olympics

Stemming from the success of the Simulation Olympics at Queen’s University, “The Simulation Olympiad at CAEP” was launched in 2012 in Niagara Falls. Six Emergency Medicine resident teams from across the country competed for the “national title”. Special thanks goes to Karen Woolfrey (Scientific Chair 2012), Vera Klein (Executive Director CAEP), and April Taylor (Taylor & Associates) for listening to my crazy scheme and sharing in the vision of what a Simulation Track/Simulation Olympiad competition could become. The resident team from McGill University was the inaugural winner in 2012, and following their lead, the University of Ottawa won in 2013 in Vancouver, and the University of Toronto won the 2014 competition in Ottawa.   This year eight teams from EM programs across Canada will be competing at CAEP (May 30th – June 3rd) for the 2015 national title. Good luck to all of them.

http://caep.ca/conference/caep-2015-lighting-way/simulation-olympiad

By many measurements, the Simulation Olympics competition at Queen’s and the Simulation Olympiad track at CAEP have been a success. This is evident in the positive feedback from participants and faculty involved, the funds generated to carry out both events, the involvement of numerous trainees, hospital staff, medical and nursing faculty, and the support of senior administrators at Queen’s University, Kingston General Hospital, and the CAEP organizing committee. Perhaps most importantly, both events have served as catalysts to bring together medical educators to develop, implement, and evaluate additional simulation-based team training initiatives.

Lessons Learned

The implementation of the Simulation Olympics and Olympiad has also come with numerous lessons learned. The organizational and funding framework required to execute these events each year (scheduling trainees and hospital staff, and securing faculty, technician time and equipment) has been a constant challenge. With university, hospital, and simulation company budgets becoming less flexible with each passing year, the ability to offer innovative simulation-based educational programs depends upon keeping major stakeholders engaged in meaningful resuscitation team training initiatives. One secret I’ve learned over the years is to invite them to the events and let them see the action for themselves. Once exposed to the excitement and energy, they know there’s no turning back.

Future Directions

Moving forward, it is important to realize that with the right vision and enough hard work, faculty educators can develop and implement successful, well-executed, innovative, and well-funded educational projects. I can guarantee the participants (residents, students, faculty etc) will be extremely satisfied with the investment in their education. I encourage anyone interested to start thinking of how you might integrate meaningful interdisciplinary team training in resuscitation within your own EM training program.

One last thing…there’s a lot of great simulation expertise in EM across Canada. Tap into the help and experience that’s out there. If you have any questions, please do not hesitate to contact me at CAEP or at any other time. I’d love to support you in starting something new and great at your institution as it relates to simulation-based education. It’s well worth the time and effort!

Remember…Less talk, more do (my favourite sim slogan).

References

  1. Dagnone JD, Takhar A, Lacroix L. The Simulation Olympics: a resuscitation-based simulation competition as an educational intervention. CJEM 2012; 14(6), 363-368.
  2. Dagnone JD, McGraw R, Howes D, Messenger D, Bruder E, Hall A, Chaplin T, Szulewski A, Kaul T, O’Brien T. How we developed a comprehensive resuscitation-based simulation curriculum in emergency medicine. Med Teacher 2014; 1-6, Early Online.
  3. Villamaria FJ, Pliego JF, Wehbe-Janek H, et al. “Using Simulation to Orient Code Blue Teams to a New Hospital Facility.” Simulation in Healthcare: The Journal of The Society for Medical Simulation 3.4 (2008): 209-16.
  4. Walter E, Howard V, Vozenilek J, et al. “Simulation-based Team Training in Healthcare”. Simulation in Healthcare. August 2011:6(7):S14-S19.
  5. Capella J, Smith S, Philp A, et al. “Teamwork Training Improves the Clinical Care of Trauma Patients.” Journal of Surgical Education 67.6 (2010): 439-43.
  6. Hunt EA, Shilkofski NA, Stavroudis TA, et al. “Simulation: Translation to Improved Team Performance.” Anesthesiology Clinics 25.2 (2007): 301-19.
  7. Lighthall GK, Poon T, and Harrison TK. “Using in Situ Simulation to Improve in-Hospital Cardiopulmonary Resuscitation.” Joint Commission Journal on Quality & Patient Safety 36.5 (2010): 209-16.
  8. Long RE. “Using Simulation to Teach Resuscitation: An Important Patient Safety Tool.” Critical Care Nursing Clinics of North America 17.1 (2005): 1-8.
  9. http://caep.ca/conference/caep-2015-lighting-way/simulation-olympiad
  10. http://meds.queensu.ca/education/simulation/simulation_olympics
  11. http://www.resuscitationinstitute.org/index.cfm/education/simulation-olympics/

Case progression: states, modifiers and triggers

In order for a simulated scenario to run smoothly, the case progression needs to be planned for in advance. This involves determining which states the patient simulator progresses through, how modifiers may change features of those states and what triggers will be used to change between states. A working understanding of these terms makes developing cases a lot easier.

State

During a simulated resuscitation scenario, the patient progresses through multiple states. The state represents the overall condition of the patient simulator during a specific period of time. I like to think of a state as a constellation composed of the vital signs and the patient status (which includes the general appearance and relevant physical exam findings) that we present to the learners. While case progression usually follows a linear route through different states, this is not the rule; the case may skip or jump to a different state depending on how it is developed (see Figure 1). Each state should be given a descriptive title.


Figure 1. An example of a case progression. States 1 through 4 represent a linear progression. State 5, V-fib, is a possible simulator state, depending on the learners’ actions. The green arrows represent unspecified triggers.

Modifier

A modifier is a learner action that induces a change in the patient simulator, but not enough to transition between states. These changes can affect either a vital sign or a component of the patient’s status, but usually not both. An example of a modifier would be the application of a 100% non-rebreather mask to a patient in an “Acute Pulmonary Edema” state. As a modifier, this learner action would cause an increase in the patient simulator’s O2 saturation from 84% to 89%. However, the state, “Acute Pulmonary Edema”, would not change. It would continue to be represented by sinus tachycardia at a rate of 120, a blood pressure of 180/105, a respiratory rate of 28 and a patient status of respiratory distress (accessory muscle use, pursed-lip breathing, etc.). A modifier can manifest its change instantly or over a specified amount of time (ex. increase the O2 saturation from 84% to 89% over 10 seconds).

Trigger

A trigger is an event that causes a change in the simulator state. I describe triggers as being either active or latent. Active triggers are represented by a learner action (ex. needle thoracostomy) or a specific combination of learner actions (ex. ≥2 methods of active cooling) while latent triggers are usually time-based (ex. 3 minutes). Active triggers are key to the progression of the case and make for great learning points during debriefing because they define important medical management decisions. Latent triggers are used to automatically progress the case. Like a modifier, a trigger can also be manifested instantly or over a specified amount of time.
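
For readers who like to think of case progression in programming terms, here is a minimal sketch in Python of a state, a modifier and a trigger. It is illustrative only: the class names, vital-sign values and trigger conditions are assumptions made for this example and are not part of the EMSimCases template.

from dataclasses import dataclass


@dataclass
class State:
    """Overall condition of the simulator during a period of time:
    vital signs plus patient status."""
    title: str
    vitals: dict   # ex. {"rhythm": "sinus tach", "HR": 120, "BP": "180/105"}
    status: str    # general appearance and relevant physical exam findings


@dataclass
class Modifier:
    """A learner action that changes features of the current state
    without transitioning to a new state."""
    action: str
    changes: dict          # vital-sign changes to apply
    over_seconds: int = 0  # 0 = instant; otherwise trend over this time

    def apply(self, state: State) -> None:
        state.vitals.update(self.changes)


@dataclass
class Trigger:
    """An event that transitions the case to a new state. Active triggers
    are learner actions; latent triggers are usually time-based."""
    kind: str        # "active" or "latent"
    condition: str   # ex. "needle thoracostomy" or "3 minutes elapsed"
    next_state: str  # title of the state to transition to


# The "Acute Pulmonary Edema" state from the modifier example above
apo = State(
    title="Acute Pulmonary Edema",
    vitals={"rhythm": "sinus tachycardia", "HR": 120, "BP": "180/105",
            "RR": 28, "SpO2": 84},
    status="Respiratory distress: accessory muscle use, pursed-lip breathing",
)

# Modifier: a non-rebreather raises the O2 saturation, but the state is unchanged
nrb = Modifier(action="100% non-rebreather mask applied",
               changes={"SpO2": 89}, over_seconds=10)
nrb.apply(apo)

# Triggers out of this state: one active (learner-driven), one latent (time-based)
triggers = [
    Trigger(kind="active", condition="BiPAP and nitroglycerin started",
            next_state="Improving"),
    Trigger(kind="latent", condition="3 minutes without airway support",
            next_state="V-fib arrest"),
]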

EMSimCases case progression template

Figure 2. An example of a state, modifier, and triggers using the EMSimCases case progression template


The EMSimCases template uses a table to display and facilitate case progression while running a simulation scenario (see Figure 2). The patient state is described in the first column with its title and vital signs. The patient status (general appearance and relevant physical exam findings) is described in the second column. A full physical exam is described in another section of the template. The third column lists possible learner actions. The fourth column contains the modifiers and triggers for that state.
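
As a rough illustration of how those four columns line up for a single state, here is one row of the progression table expressed as a Python dictionary. The clinical content is invented for this sketch and is not taken from an actual EMSimCases case.

# Illustrative only: one row of the case progression table, mirroring the
# four columns described above.
row = {
    "patient_state": {
        "title": "1. Acute Pulmonary Edema",
        "vitals": {"HR": 120, "BP": "180/105", "RR": 28, "SpO2": "84%"},
    },
    "patient_status": "Alert, tripoding, accessory muscle use, crackles on auscultation",
    "expected_learner_actions": [
        "Apply oxygen",
        "Obtain IV access and place on monitors",
        "Start BiPAP and nitroglycerin",
    ],
    "modifiers_and_triggers": {
        "modifier": "Non-rebreather applied: SpO2 84% to 89% over 10 seconds",
        "trigger": "BiPAP and nitroglycerin started: move to the next state",
    },
}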


Any simulation educator can tell you that no matter how much planning goes into case development, learners will always surprise you with an action that you did not predict. This highlights the importance of being able to adapt the case progression to unforeseen learner actions on the fly. However, if you develop cases with a logical progression of states, account for possible modifiers and how they will change features of those states, and define the triggers that will transition between states, your simulation scenario will run as smoothly and realistically as possible.

Realism

What is it?

Realism is the degree to which your simulation environment recreates or mimics the patient environment for your learners.

A word on fidelity.

The terms realism and fidelity are essentially interchangeable. However, many people associate the term fidelity with the amount of technology used to recreate the patient environment. For example, when educators refer to a case as “high fidelity,” what they often mean is that they are using a costly computer-based mannequin with all the bells and whistles. The caveat, of course, is that having cutting-edge equipment does not, on its own, ensure that the learner’s experience approaches reality. I prefer the term realism because it reminds us that there is more to simulate than just the physical environment.

Why it’s important.

The basic premise of simulation as an educational modality is that it allows direct observation of a learner’s behaviour. Furthermore, debriefing in simulation allows discussion about noted learner deficiencies. Teasing out the learner’s cognitive process and knowledge gaps to discover the origins of the learner’s behaviour is paramount. In order to elicit true behaviour from a learner (i.e., behaviour that most closely mirrors their performance with real patients), the learner must treat the situation as a real one. And to do so, they must believe in it.

If the environment in which the learner is practising does not even come close to imitating reality, then the learner will not fully engage in the learning exercise. This limits the ability of the instructor to assess the learner’s abilities. In addition, not addressing realism lets learners use it as an excuse for their performance. For example, “If the mannequin had better breath sounds, I would have decompressed the tension pneumothorax.” Or “If this case was in the Emergency Department, I’m sure I would have seen the VT on the monitor and then shocked the patient.”

Making the environment mirror reality does not necessarily require high-tech equipment. It does, however, require engaging the learners and addressing limitations to realism before the scenario begins. Orient learners to the mannequin so they know where they can feel pulses and where to listen for breath sounds. If the mannequin doesn’t have these features, let the learners know how to ask for physical exam findings. It is remarkable how well learners can engage in a scenario with a mannequin that has no high-tech functions. They are only able to do this if you create conceptual realism.

Types of realism

In 2007, Rudolph, Simon, and Raemer described three different types of realism as essential to simulation training.1 Their terminology was a slight modification of Dieckmann’s work on the aspects of realism, also published in 2007.2 The three components of realism highlighted by Rudolph et al. are as follows:

1) Conceptual

Conceptual realism allows learners to think about a case in the same manner they would for a real patient. The most important component to creating conceptual realism is providing the learners with enough information to accurately frame the case. For example, you would expect a patient with a tension pneumothorax to have tachycardia, hypotension, and decreased breath sounds on one side. How this information is conveyed matters less than the fact that the information is logical in the context of the case.

To understand the power of conceptual realism, look to oral exams. The learner is able to make a diagnosis and manage a patient without any physical cues present. Oral exams can create conceptual realism. Conceptual realism is crucial to a good simulation scenario. And sometimes, adding too many bells and whistles actually takes away from the concept.

Yes, that’s right. You can be very low tech and still run fantastic simulation. You just need to set the stage, meet minimum cognitive standards, and debrief.

2) Physical

There are some things that just need to be practised in real time and space. Physical realism is most important for procedural skills. Practising airway management on an airway head that has unrealistic anatomy just doesn’t help learners to develop the motor memory they need. This doesn’t mean that all simulators need to be exact replicas. But to create physical realism, a task trainer must emulate the necessary motor feedback required to practise a skill properly. For example, a chest tube trainer doesn’t need to be an entire pig chest. It does, however, need to have an appropriate degree of resistance so that learners develop the sense of how hard to push in order to penetrate the pleura.

All mannequins have poor physical realism in some way. But with enough conceptual and experiential realism, this doesn’t need to affect the quality of the learning experience.

3) Emotional and experiential

This is the type of realism that puts a knot in your stomach. Experiential realism is about creating the emotions that often make our jobs difficult. Examples would include having a mother sob in the corner while you try to run a code on her infant child. Or having a difficult parent present who becomes obstructive to care. Or how horrifying it can be to see a patient with a GI bleed exsanguinating from their mouth. Perhaps the challenge is creating the cognitive burden that goes along with managing two patients at once. Or perhaps the experiential realism comes from the frustration of dealing with a team that is obviously ignoring your direction. In other words, experiential realism is important to consider if the purpose of a case is to practise working through an emotionally challenging situation or to teach techniques for managing a difficult family member or team member. It is also an important part of why junior learners can find simulation intimidating: good experiential realism recreates the fear or discomfort that goes with being uncertain how to manage a particular condition. Again, your mannequin can be a Cabbage Patch Kids doll if your sobbing parent actor is good enough.

The reality of realism

Realism is essential to simulation. As a simulation educator, you should be aware of which aspects of realism are most important for the case you are designing. Do you need to create an appropriate cognitive environment to assess the resident’s management of a TCA overdose? Do you need to see how the resident can lead a difficult team? Or do you need to see that a resident can skilfully perform a cricothyroidotomy? Or do you need all three components to assess a resident’s management of a pediatric trauma? Design your case and supplies with your realism goals in mind.

References

  1. Rudolph JW, Simon R, Raemer DB. Which reality matters? Questions on the path to high engagement in healthcare simulation. Simul Healthc. 2007;2(3):161-163. doi:10.1097/SIH.0b013e31813d1035.
  2. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2(3):183-193. doi:10.1097/SIH.0b013e3180f637f5.