Original Publication
Open Access

Clinical Reasoning in the Primary Care Setting: Two Scenario-Based Simulations for Residents and Attendings

Published: November 16, 2018 | 10.15766/mep_2374-8265.10773

Appendices

  • Clinical Reasoning in the Outpatient Setting Participant Workflow Diagram.pdf
  • Participant Expectations and Instructions.docx
  • Think-Aloud Instructions and Warm-Up.docx
  • Door Information for Diabetes.docx
  • Diabetes Standardized Patient Case.docx
  • Standardized Patient Rehearsal Guide Diabetes.docx
  • Diabetes Storyboard.docx
  • Supplies List for Diabetes.docx
  • Door Information for Angina.docx
  • Angina Standardized Patient Case.docx
  • Standardized Patient Rehearsal Guide for Angina.docx
  • Angina Storyboard.docx
  • Supplies List for Angina.docx
  • Standardized Patient Implementation Checklist for Diabetes.docx
  • Standardized Patient Implementation Checklist for Angina.docx
  • Postencounter Form.pdf
  • Cognitive Load Questionnaire.pdf
  • Scenario Authenticity Questionnaire.docx

All appendices are peer reviewed as integral parts of the Original Publication.

To view all publication components, extract (i.e., unzip) them from the downloaded .zip file.


Abstract

Introduction: We describe the development and implementation of tools that medical educators and researchers can use to develop or analyze the clinical reasoning of physicians, from residents through attendings, in an outpatient clinic setting. The resource includes two scenario-based simulations (i.e., diabetes, angina), implementation support materials, an open-ended postencounter form, and a think-aloud reflection protocol. Method: We designed two scenarios with potential case ambiguity and contextual factors to add complexity for studying clinical reasoning. The scenarios are designed to be used prior to an open-ended written exercise and a think-aloud reflection to elicit reasoning and reflection. We report on their implementation in a research context but developed them for use in both educational and research settings. Results: Twelve physicians (five interns, three residents, and four attendings) considered between three and six differential diagnoses (M = 4.0) for the diabetes scenario and between three and nine differentials (M = 4.3) for angina. In think-aloud reflections, participants reconsidered their thinking between zero and 14 times (M = 3.5) for diabetes and zero and 11 times (M = 3.3) for angina. Cognitive load scores ranged from 4 to 8 (out of 10; M = 6.2) for diabetes and 5 to 8 (M = 6.6) for angina. Participants rated scenario authenticity between 4 and 5 (out of 5). Discussion: The potential case content ambiguity, along with the contextual factors (e.g., patient suggesting alternative diagnoses), provides a complex environment in which to explore or teach clinical reasoning.


Educational Objectives

As a result of participating in these scenarios, participants will:

  1. Practice gathering, analyzing, and interpreting information and evidence to formulate differential diagnoses and form a problem list and a leading diagnosis with supporting evidence.
  2. Practice planning and performing a focused physical exam to further support clinical reasoning.
  3. Practice developing and communicating a management plan.
  4. Reflect on their own clinical reasoning efforts and strategies by thinking aloud and observing their own video-recorded simulation.

Introduction

Clinical reasoning includes the gathering and synthesizing of information, interpretation of data (e.g., patient’s responses to diagnostic questions, lab or radiologic findings), generation and refinement of hypotheses, and problem representation, or the use of illness scripts.1-5 Clinical reasoning is vital to making an accurate diagnosis, eliciting appropriate management, and developing efficient therapeutic plans.1,6,7 Research examining clinical reasoning suggests it is a complex activity that relies on several factors, including the physician’s cognitive processes, knowledge derived from formal and informal experiences, and prior practice experiences (e.g., prior exposure to similar patients).5,7

Efforts to assess clinical reasoning use a variety of strategies. Among the most common are multiple-choice questions,8 case-based learning,9 and the integration of think-aloud reflections with video-based scenarios10 or virtual patient scenarios.11 In many of these examples, individuals are asked to imagine themselves as the hypothetical participant rather than engaging in their own clinical encounter. Live scenario-based simulations have also been reported, although less frequently.12-14 Moreover, save for one example,15 none of these scenario-based approaches are paired with a free-text, open-ended approach to assessment and reflection, which can offer deeper understanding of the process of reasoning.16,17

In addition, most of these efforts to support clinical reasoning are designed for individuals still in undergraduate training rather than for health care professionals’ learning and development throughout their careers, something recommended by the recent National Academies report on improving diagnosis.6

We aimed to create scenarios where we could examine how physicians with a range of experience levels organized their interview, physical exam, diagnostic ideas, and management choices when engaging with a single patient (portrayed by a standardized patient [SP]). We considered that scenario-based simulations, which use a narrative to guide participants’ engagement as they address a problem that needs to be explored or resolved,18 would encourage physician performance that would be similar to the actual clinical setting while allowing us to control for the known leading and differential diagnoses. We also considered that these scenarios would provide physician participants with the opportunity to engage in many of the component activities associated with clinical reasoning (e.g., information gathering, interpretation of diagnostic information, hypothesis generation, management plans).

Several authors argue that scenario-based simulations like these are ideal for exploring the complexities of clinical practice, such as clinical reasoning.3,19,20 For example, Elstein, Shulman, and Sprafka describe how they utilized scenario-based simulations to conduct an in-depth descriptive analysis of physicians’ behaviors while engaging with an SP.3 Dieckmann, Gaba, and Rall argue that scenario-based simulations are complex social endeavors that support interactions among health professionals (e.g., medical doctors), simulated participants (e.g., SPs), and other culturally relevant devices, such as diagnostic equipment.19 Kneebone, Scott, Darzi, and Horrocks suggest that simulations support the development of skills and knowledge within a context that represents many of the elements of professional clinical practice.20 The findings of a more recent descriptive analysis of scenario-based simulations suggest that they provide participants with an opportunity to make sense of a clinical situation because they support activities such as information gathering (e.g., diagnostic questioning, interpreting diagnostic findings) as well as carrying out patient management activities.21

Here, we describe the development, testing, design improvements, and implementation of two live scenario-based simulations (i.e., new onset diabetes, coronary artery disease presenting with angina), together with an open-ended written exercise and a think-aloud reflection protocol. We report on a single study of their use in a research context, but we developed them to be used in both educational and research settings. This suite of resources can be used to support researching or teaching residents’ and attending physicians’ clinical reasoning in an outpatient clinic setting.

Audience and Contribution
This resource was designed to assess the clinical reasoning of physicians with a range of experience and ability (i.e., residents to attendings). We describe the strategies we used to develop and test scenarios with the expressed intent of integrating diagnostic ambiguity (where a series of signs and symptoms could be attributed to more than one diagnosis)14 and contextual factors (referring to factors that may interact, such as patient, physician, and setting factors)10,22 as ways to increase scenario complexity. Furthermore, pairing the scenarios with two different reflective tools (the free-text clinical questions23 and open-ended think-aloud reflection16,17) allows for a range of reflective experiences through which researchers, instructors, and learners can explore clinical reasoning.

This publication adds to the growing body of resources in MedEdPORTAL supporting the development of clinical reasoning and similar concepts (i.e., diagnostic reasoning, diagnostic decision-making). For example, several current resources focus on teaching medical students explicit strategies to develop their clinical reasoning skills.24-28 Our suite of resources adds scenarios and a reflection protocol explicitly designed to support more experienced physicians by building in increased complexity. Many of the MedEdPORTAL resources currently available emphasize teaching strategies such as classroom-based case discussions,24 case vignettes supported by illness script worksheets,28 or case presentations of patients seen during a family medicine clerkship,25 to name just a few. Others focus on strategies faculty or peers can use to assess clinical reasoning in the clinical setting.29 Among simulation-based or SP-based scenarios, few focus explicitly on supporting everyday clinical decision-making; rather, they frame clinical reasoning as an activity that supports rarely occurring or high-risk/low-frequency diagnoses.30,31 Other live scenario simulations offered in MedEdPORTAL address either diabetes or angina,32-34 but none offer a pairing of different cases, allowing learners to discuss the challenges brought about by the specifics of case content.

Lastly, this resource builds upon prior work in MedEdPORTAL in two ways: First, these scenarios take clinical reasoning skills out of the classroom or small-group context and offer individual-level practice opportunities. Second, the think-aloud protocol can be independently integrated with existing simulation or SP scenarios in addition to, or in lieu of, postsimulation debriefing. These cases and related tools offer much-needed instructional material for the outpatient primary care setting, as opposed to, for instance, the emergency setting.34

Methods

This section reports on the participatory design procedures and instructional features used to develop the scenarios, the measures and reflection tools used, the procedures and logistics for scenario implementation, and the casting and training of SPs.

Participatory Design Procedures
Participatory instructional design is an approach that encourages the inclusion and integration of the perspectives of diverse stakeholders.35 This approach allowed us to develop scenarios that could be reliably implemented by the simulation lab, that represented common patient conditions, and that would support analysis of language and behavioral patterns. Each scenario was developed in three phases: initial design, pilot testing, and implementation evaluation with physician participants.

Initial design: This stage began by determining scenario goals and identifying stakeholders who could help develop scenarios to support the practice behaviors of physicians with diverse levels of expertise. Clinical stakeholders included resident and attending physicians practicing family medicine, internal medicine, and surgery. Among these individuals, most regularly taught or evaluated less experienced physicians and provided insight into common errors and practice behaviors. Simulation stakeholders included SP trainers, SPs, and operational specialists. Our stated goals were the following:

  1. To adapt two video-based scenarios representing common patient presentations in primary care (i.e., diabetes, angina) to the live scenario-based context.
  2. To ensure that both adapted scenarios contained diagnostic ambiguity, which we argued would provide participants with an opportunity to consider more than a single diagnosis.
  3. To embed contextual factors into one scenario to allow for more complex clinical reasoning and an opportunity to compare participant performance across the two cases.

We conducted design meetings with small groups of stakeholders to develop the scenario-based simulations (we refer the reader to the literature on activity theory for more information on our theoretical framework21,36,37). We adapted an existing context questionnaire36 to determine what participants’ goals or working hypotheses might be, what clinical tools and diagnostic artifacts they might request or rely upon, what clinical guidelines might influence their practice behaviors, and the roles and anticipated activities of other actors who might normally be present (e.g., patient care tech). In this way, we developed the simulation session workflow (Appendix A), participant expectations and instructions (Appendix B), door information (Appendices D for diabetes and I for angina), SP cases (final cases in Appendices E for diabetes and J for angina), and scenario storyboards (final storyboards in Appendices G for diabetes and L for angina).

Pilot testing: Following the initial design phase, we conducted a read-through, followed by a rehearsal of each scenario. After read-throughs and discussions with two physicians, two cast SPs, and the SP educator, we further revised pertinent medical history and social and family history and identified a series of scripted key statements for each patient case (Appendices E for diabetes and J for angina). The revision history of each design change was preserved in Google Docs.

Implementation evaluation: In this phase, we examined scenario implementation with 12 physician participants who completed both scenarios, examining whether our design strategies resulted in physicians considering more than a single diagnosis, allowed participants to gather enough information to develop a management plan, and provided adequate complexity for interns, residents, and attendings. We conducted a content analysis of the postencounter forms (PEFs): free-text questions about leading and differential diagnoses, problem list, and management decisions (Appendix P).23 We also analyzed the think-aloud transcriptions for the presence of reflection (in particular, reconsidering prior stances and indicating uncertainty) to better understand the broad quality of clinical reasoning. Finally, we asked participants to rate their perceptions of scenario authenticity after they had completed the second think-aloud protocol (see Appendices A for workflow and R for authenticity item).
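
To make this content-analysis step concrete, the sketch below (in Python) tallies how many participants listed each differential diagnosis in their free-text PEF responses. It is a minimal sketch under our own assumptions: the synonym map, function names, and toy data are hypothetical illustrations, not part of the published coding protocol (which relied on human coders).

```python
from collections import Counter

# Hypothetical synonym map: collapse equivalent free-text entries onto one
# canonical label before counting.
SYNONYMS = {
    "dm2": "type 2 diabetes",
    "t2dm": "type 2 diabetes",
    "type 2 diabetes mellitus": "type 2 diabetes",
    "uti": "urinary tract infection",
}

def normalize(label: str) -> str:
    """Lowercase and trim a free-text diagnosis, then map it to a canonical label."""
    key = label.strip().lower()
    return SYNONYMS.get(key, key)

def tally_differentials(pef_responses: list[list[str]]) -> Counter:
    """Count how many participants listed each differential.

    Each inner list holds one participant's free-text differentials; the set()
    de-duplicates within a participant, so a count of n reads as
    "n participants listed this diagnosis."
    """
    counts: Counter = Counter()
    for differentials in pef_responses:
        counts.update({normalize(d) for d in differentials})
    return counts

# Toy data for three imaginary participants.
responses = [
    ["DM2", "hypothyroidism", "UTI"],
    ["type 2 diabetes mellitus", "diabetes insipidus"],
    ["T2DM", "hypothyroidism", "hypercalcemia"],
]
for diagnosis, n in tally_differentials(responses).most_common():
    print(f"{diagnosis}: {n}")
```

Run separately per training level (interns, residents, attendings), the same tally yields frequency tables like Tables 2 and 3 in the Results.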

Instructional Design Features Used to Develop This Resource
We drew from Tschan and colleagues’ strategy of creating scenarios that introduce an ambiguous diagnostic situation, which they define as a series of symptoms and findings that could suggest more than one diagnosis (of note, each scenario was written as a straightforward presentation of the correct diagnosis being portrayed, validated by a group of expert physicians).14 For example, they designed scenarios where the SP’s signs and symptoms could have plausibly been attributed to anaphylaxis or tension pneumothorax, but they also included information in the scenario to allow a physician participant to rule out the incorrect diagnosis.14 We achieved this in our scenarios by incorporating history of present illness, past medical history, and social and family history into the case where the SP presented with symptoms of the leading diagnosis (i.e., diabetes or angina) but where some of the signs and symptoms could also be consistent with other conditions (e.g., urinary tract infection, indigestion). We hypothesized that this diagnostic ambiguity would generate relatively complex and authentic scenarios that could be used to support the learning of physicians across their careers.6

For one scenario (angina), in addition to diagnostic ambiguity, we introduced a contextual factor (diagnostic suggestion) to further increase complexity. Recent literature suggests that contextual factors like this may influence clinical reasoning performance in novice and expert clinicians alike, potentially introducing significant unwanted variance (error) in patient care.10 When contextual factors are introduced, a physician may see two patients with the same history, symptoms, and findings yet come to two different diagnostic decisions.10 We believed that the combination of ambiguity and a contextual factor in one of the cases would both be authentic and offer an opportunity to compare the two cases for relative complexity and challenge.

Selected Measures and Reflection Tools
The scenario development process described above was accompanied by a thoughtful selection of measures and reflection tools and included an open-ended PEF eliciting clinical reasoning, a think-aloud protocol for reflection on reasoning, and a cognitive load question to check for appropriate difficulty across participants. We describe each below.

PEF: To examine the clinical reasoning process (i.e., the steps to the diagnosis and management decisions), we used a previously published PEF that has been argued to be reliable and valid for assessing clinical reasoning (Appendix P).23 This measure asks for leading and differential diagnoses, additional interview questions or exam actions that participants would like to include, a problem list, supporting data for the leading diagnosis, and a management plan. We considered that this detailed open-ended measure would give us a good understanding of the process participants go through in coming to a diagnosis and treatment plan.

Think-aloud reflection: Asking someone to think aloud about a task, either concurrently or retrospectively, can provide insight into cognition and experience (see Appendix C for think-aloud warm-up and instructions).16,17 Moreover, thinking aloud has been used to great benefit in live simulation, offering a better understanding of reasoning and actions throughout the simulation.15 Unlike some other forms of reflection (e.g., debriefing), thinking aloud involves little to no feedback during the exercise.15-17,38 Instead, while watching a video of their own performance, participants are encouraged to provide an almost stream-of-consciousness reflection on their thoughts at the time of the scenario. The think-aloud literature advises the use of only minimal verbal prompting, such as “Keep talking,” “Uh-huh,” or “Think aloud,” if the participant pauses for more than 15-60 seconds.16,17,38 This retrospective thinking aloud not only reveals reasoning patterns but also offers an opportunity for participants to strengthen their learning through reflection.1

Cognitive load: We examined participants’ cognitive load related to completing the PEF using a single question provided on a separate form (Appendix Q) adapted from Brünken, Seufert, and Paas.39 We assessed participants’ perceptions of their cognitive load after they completed each PEF by asking them to “please rate your invested mental effort after completing the postencounter form” on a scale ranging from 1 (very low mental effort) to 10 (very high mental effort). Due to participants’ range of years of experience, we included this question to check for adequate effort and engagement across participants.

Scenario Procedures and Logistics
Scheduling logistics: For each scheduled date, we requested two rooms in the simulation center. The first room was used to allow participants to complete the think-aloud warm-up and PEF and to rewatch their own video-recorded performance while thinking aloud. No special setup was required for the first room. The second room mimicked an outpatient clinic setting, including an exam table, a stool, a chair, a sink, and a functioning headwall with an otoscope and ophthalmoscope. Participants were provided with a stethoscope in the event they did not bring their own. Complete supply lists are included in the appendices (Appendices H for diabetes and M for angina).

Staffing requirements: We scheduled two team members, in addition to the designated SPs portraying patient roles, to support each session. The first team member was responsible for greeting participants and ensuring that they were oriented to the simulation and think-aloud activities and completed all the steps of the session. The second team member was responsible for coordinating the SPs and simulation operations (e.g., giving door report, keeping time), managing the video recording, and sitting with participants while they engaged in the think-aloud (Appendix C). Both team members were trained to conduct the think-aloud protocol and were research associates, rather than physicians.

Video recording and video playback during think-alouds: To support the replaying of participant videos during the think-alouds, we video-recorded each scenario using two video cameras fitted with removable SD cards. In this way, one camera could act as a backup in case the primary camera failed.

Think-alouds: Following each scenario, while the participant was completing the PEF, a study team member removed the data card from the camera and inserted it into a designated computer for replaying. This same team member then read the instructions and sat with the participant during the think-aloud process. The study team member was instructed to not ask questions and to limit verbal interactions to comments such as “Uh-huh” or “Hmm” to minimize disruptions. In the event participants stopped thinking aloud for more than 15-60 seconds, the study team member gently nudged the participant by saying, “Think aloud.” Appendix C contains detailed warm-up and implementation instructions.

Participant procedures: On the scheduled scenario day, physician participants were oriented to the simulation rooms and the workflow of the day (Appendices A & B). They were then oriented to the think-aloud procedures they would use following the scenario (Appendix C). Instructions and practice think-aloud exercises were scripted for consistency and were implemented by study team members.

Next, participants were (1) provided with the door information (Appendix D) for the first scenario and advised to enter when ready; (2) allotted up to 15 minutes to complete their initial assessment, physical exam, and postassessment discussion with the SP (there was no penalty for finishing early or being stopped before completion, and depending upon time constraints, some participants were allowed to go a couple of minutes beyond the allotted 15); and (3) advised that the scenario would run in actual time (i.e., not sped up). Following the scenario performance, participants were guided to the designated debriefing room where they (4) completed the PEF (Appendix P), (5) completed the cognitive load question (Appendix Q), (6) reviewed the instructions for thinking aloud (Appendix C), and (7) rewatched their own video-recorded performance while thinking aloud (which was audio recorded using a digital audio recorder).

Following the first scenario, participants followed steps 1-7 above for the second scenario. Participants’ total time to complete these two scenarios, the related PEFs and think-alouds, and the other informational questionnaires was approximately 2 hours.

Optional feedback: Because these scenarios and reflection protocols were initially used to support researching clinical reasoning processes, we did not schedule time for immediate feedback. However, we recognized that participation in the scenarios could still be treated as learning experiences. Thus, following participation, we offered to schedule time for participants to receive feedback from an attending physician on the study. These sessions were scheduled on an ad hoc basis.

Casting, Training, and Quality Improvement
SP casting and training: We sought SPs similar to the designed roles in age and body habitus (e.g., the SP portraying the diabetes patient was moderately overweight). SPs were provided with the patient case (Appendices E for diabetes and J for angina) and then rehearsed with an SP trainer as needed, drawing from a rehearsal guide (Appendices F for diabetes and K for angina). The use of a rehearsal guide was intended to support implementation fidelity because we occasionally had large breaks in time between study participants. SPs were instructed to provide information when prompted and to minimize volunteering it.

Quality improvement of SP performance: We developed and conducted a review of all SP portrayals to examine how consistently they implemented their roles (see Appendices N for diabetes and O for angina). This, in turn, supported ongoing SP training needs and guided decisions about which performances were of high enough quality for analysis. For example, if an SP’s performances were inconsistent with the case as written, we posited that clinical reasoning processes could be skewed. After implementation reviews, findings were shared with the SPs to improve future performance. Findings also supported ongoing scenario improvements (e.g., modifying a scripted SP response or gestural cue).

Results

Participants in this sample were 12 internal medicine, family medicine, and surgery physicians; six were female, and six were male. Eight were resident physicians (five in postgraduate year 1 [PGY 1], three in PGY 3), and four were attendings. Participants’ ages and genders are given in Table 1.

Table 1. Participant Demographics (N = 12)

Training Level      Age   Gender
Intern (PGY 1)       32   Female
                     28   Male
                     42   Female
                     27   Female
                     27   Male
Resident (PGY 3)     30   Female
                     29   Male
                     29   Female
Attending            55   Male
                     60   Male
                     38   Female
                     49   Male

Use of Scenario Time
For the diabetes scenario, participants’ time ranged from 7:06 to 19:10 (M = 14:38). For the angina scenario, participants’ time ranged from 11:10 to 17:15 (M = 14:19). Two participants ran out of time; their scenarios were stopped by the study team between 17 and 19 minutes to protect participants’ schedules and ensure completion of the PEF and think-aloud.

Differential Diagnoses and Supporting Data Listed by Participants
Diabetes: Participants considered a total of 17 independent differential diagnoses as measured by the PEF (Table 2). The most common differentials included diabetes (n = 12), hypothyroidism (n = 9), diabetes insipidus (n = 5), and urinary tract infection (n = 5). These appeared to differ by PGY status: Interns considered 10 independent differential diagnoses, residents considered five, and attendings (those having completed their initial residency) listed 12. The number of differential diagnoses listed by each participant ranged from three to six (M = 4.0). These also differed by PGY status (due to the small size of the sample, neither this nor any of the distinctions below is statistically significant): Interns listed between three and six differentials (M = 4.0), residents (PGY 3) listed between three and four differentials (M = 3.3), and attendings listed between three and six differentials (M = 4.2). This range suggests that despite the straightforwardness of the case in terms of leading diagnosis (all participants correctly listed diabetes as their leading diagnosis), there was adequate ambiguity to create other possibilities.

Table 2. Most Common Differential Diagnoses Considered for the Diabetes Scenario

                                                   Frequency of Listed Differential Diagnoses
Differential Diagnosis(a)                          Interns (n = 5)   Residents (n = 3)   Attendings (n = 4)
Type 2 diabetes                                           5                  3                   4
Hypothyroidism                                            5                  2                   2
Diabetes insipidus                                        1                  3                   1
Urinary tract infection                                   1                  2                   2
Hypercalcemia                                             2                  0                   0
Psychogenic polydipsia                                    1                  0                   1
Syndrome of inappropriate antidiuretic hormone            2                  0                   0
Yeast infection                                           0                  1                   1

(a)Additional listed diagnoses that received a single mention included anemia, bladder incontinence, glomerulonephritis, multiple endocrine neoplasia, nephrotic syndrome, nonspecific autoimmune, nonspecific endocrine, potomania, and sleep apnea.

Content analysis of the PEF revealed that the most common supporting data participants listed included polydipsia (n = 10), polyuria (n = 9), fatigue (n = 9), polyphagia (n = 7), recurrent yeast infections (n = 7), vision changes (n = 6), and obesity (n = 4). Participants also listed items related to past medical and family history. Among the most common were hypertension and hypothyroid (n = 3) and smoking history and prior parathyroid surgery (n = 2). These differed by PGY status: Interns listed between four and six items of supporting data (M = 4.8), residents listed between five and nine (M = 7.3), and attendings listed between four and 10 (M = 7.5).

Angina: The most common leading diagnoses were angina (n = 5), stable angina (n = 4), coronary artery disease (n = 2), and acute coronary syndrome (n = 1). We considered unstable angina as the correct leading diagnosis, as it was the most specific, but offered near full credit for angina, angina pectoris, and stable angina.

Participants considered a total of 25 independent differential diagnoses (Table 3). The most common differentials included cardiac causes, such as coronary artery disease/acute coronary syndrome/unstable angina/stable angina (n = 17), followed by gastroesophageal reflux disease (GERD; n = 9), musculoskeletal/costochondritis (n = 4), pulmonary embolism (n = 4), and peptic ulcer disease (n = 3). Notably, GERD was the most commonly mentioned diagnostic suggestion by SPs in the scenario. When taking into consideration PGY status, interns listed between three and nine differentials (M = 4.4), residents between three and five (M = 3.6), and attendings between three and six (M = 4.8). These also appeared to differ by PGY status: Interns considered 15 independent differential diagnoses, residents 10, and attendings 13.

Table 3. Most Common Differential Diagnoses Considered for the Angina Scenario

                                           Frequency of Listed Differential Diagnoses
Differential Diagnosis(a)                  Interns (n = 5)   Residents (n = 3)   Attendings (n = 4)
Cardiac causes(b)                                 8                  4                   5
Gastroesophageal reflux disease                   3                  2                   4
Costochondritis/musculoskeletal pain              2                  1                   1
Pulmonary embolism                                0                  2                   2
Peptic ulcer disease                              1                  0                   2
Congestive heart failure                          2                  0                   0

(a)Additional listed diagnoses that received a single mention included anxiety, aortic dissection, arrhythmia, asthma, chronic cholelithiasis, chronic obstructive pulmonary disease, deep vein thrombosis, enteritis, esophageal motility disorder, gastritis, myocardial infarction, noncardiac chest pain, pancreatitis, Prinzmetal angina, and structural heart disease.
(b)Such as coronary artery disease, acute coronary syndrome, angina/angina pectoris, stable angina, and unstable angina. Frequency counts exceed 12 because some participants listed more than one cardiac diagnosis.

The most common supporting data participants listed on the PEF included chest pain (n = 12), which seven participants further qualified regarding onset with exertion; shortness of breath/dyspnea (n = 10), which six participants further qualified as also occurring with exertion; history of hypertension (n = 8); diabetes (n = 7); smoking (n = 7); GERD (n = 5); and family history of cardiac disease (n = 3). When broken out by PGY status, interns listed between two and eight items of supporting data (M = 4.8), as did residents (M = 5.0), and attendings listed between two and 13 (M = 6.8).

Management Considerations
We also examined participant PEFs for reasoning related to patient management.

Diabetes: Each suggested management, treatment, or testing option was individually scored by physician experts as correct, partially correct, or incorrect, resulting in a percentage of correct suggestions for each participant (a scoring sketch appears at the end of this subsection). For the diabetes case, attendings scored slightly better on the management item (M = 67.6%) than interns and residents (M = 56.9% for both). The most frequent lab tests requested included blood glucose (n = 5), glycated hemoglobin (A1C; n = 5), thyroid levels/panel (n = 5), complete metabolic panel (CMP; n = 4), urinalysis (n = 4), urine culture (n = 4), and complete blood count (CBC; n = 3). Other labs that participants listed included urine glucose, ECG, potassium hydroxide, arterial blood gas, insulin antibodies, cholesterol panel, urine sodium, and blood sodium. Three participants indicated that they would request labs but did not specify particular tests.

In addition to obtaining labs, nine of 12 participants provided additional management choices that included pharmacological management (e.g., use of antihyperglycemics such as metformin, an insulin trial), lifestyle management (e.g., nutrition, exercise), and referrals to other specialists (e.g., diabetes nurse educator, ophthalmologist).

Angina: For the angina case, management scores were similar, with the three residents scoring most highly (M = 81.9%), followed by interns (M = 77.8%) and attendings (M = 76.3%). The most frequent diagnostic test requested by participants was obtaining a stress test (n = 10), followed by obtaining an ECG (n = 9). Two participants considered requesting a chest X-ray. Participants also considered obtaining additional laboratory testing, such as CBC (n = 2), CMP (n = 2), and cardiac enzymes (n = 2). Other labs mentioned included lipid profile, A1C, and urine glucose. Three participants indicated that they would request labs; however, they did not provide further details.

In addition to testing, four participants considered pharmaceutical management, the most common medications being a statin (n = 3), nitrates (n = 3), and aspirin (n = 3). Other medications listed included an ACE inhibitor, beta blockers, and adjustments to the patient’s current medications (i.e., hydrochlorothiazide, lisinopril). Four participants discussed whether to admit the patient or manage him in the outpatient setting, and two indicated a cardiac catheterization might be necessary. These participants also prioritized administration of medications and stress testing using qualifiers, including “expedite,” “ASAP,” and “right away.”
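
As promised above, here is a minimal sketch of how the expert ratings could be aggregated into a percentage score. The publication does not state how partially correct suggestions were weighted; the half-credit weight below is our assumption, for illustration only.

```python
# Ratings come from physician experts: "correct", "partial", or "incorrect".
# The 0.5 weight for "partial" is an illustrative assumption, not the
# authors' published rubric.
WEIGHTS = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}

def management_score(ratings: list[str]) -> float:
    """Percentage score for one participant's PEF management suggestions."""
    if not ratings:
        return 0.0
    earned = sum(WEIGHTS[r] for r in ratings)
    return 100.0 * earned / len(ratings)

# Example: four suggestions rated by experts -> (1 + 1 + 0.5 + 0) / 4 = 62.5%.
print(management_score(["correct", "correct", "partial", "incorrect"]))
```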

Think-Aloud Reflections
To explore participants’ reasoning processes, we coded the think-alouds for reconsiderations (indications that a participant would have done something differently, either in the scenario itself or on the PEF) and tentativeness (words like possibly, try, seem, and if that tended to indicate uncertainty). The former were hand coded, and the latter were automatically coded by Linguistic Inquiry and Word Count (a text analysis program).40 Although detailed qualitative analysis is currently underway, we believe this initial pass offers some evidence that participants were actively reasoning by rethinking their decisions and hedging their beliefs.
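
As a rough illustration of the tentativeness measure, the sketch below computes the share of words in a transcript that match a small marker list. LIWC uses larger, validated (and proprietary) dictionaries; the word list and tokenizer here are simplified stand-ins we chose for illustration, not LIWC’s actual categories.

```python
import re

# Hand-picked stand-ins for a tentativeness dictionary; LIWC's real
# category is far larger and empirically validated.
TENTATIVE = {"possibly", "try", "seem", "seems", "seemed", "if",
             "maybe", "perhaps", "might", "guess", "probably"}

def tentativeness_pct(transcript: str) -> float:
    """Percentage of words in a think-aloud transcript that are tentative markers."""
    words = re.findall(r"[a-z0-9']+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in TENTATIVE)
    return 100.0 * hits / len(words)

sample = "I might order an A1C here, if the glucose seems high. Maybe a thyroid panel too."
print(f"{tentativeness_pct(sample):.1f}%")  # 4 of 16 tokens -> 25.0%
```

The reconsideration counts reported below, by contrast, were hand coded rather than computed.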

Diabetes: As with the PEF analysis above, we noted differences among the groups. Participants reconsidered their actions in the diabetes case between zero and 14 times (M = 3.5). While significance testing was not possible (here or in any of these analyses), we noted that interns (M = 6.4) reconsidered actions more than residents (M = 2.7) who, in turn, reconsidered more than attendings (M = 0.5). Meanwhile, all participants used tentativeness markers (measured as a percentage out of the total word count), ranging from 4.5% of total words in a case to 10.1%. Interns (M = 7.1%) and residents (M = 7.5%) were more tentative in their diabetes think-alouds than attendings (M = 5.4%). Thus, while most participants reconsidered actions and were tentative in their phrasing to some degree in the diabetes case, attendings reconsidered less and were less tentative.

Angina: Participants reconsidered actions in the angina case between zero and 11 times (M = 3.3). Interns (M = 4.8) and residents (M = 4.3) reconsidered actions more than attendings (M = 0.5). Tentativeness markers ranged from 4.2% of total words in a case to 8.9%. For the angina case, residents (M = 7.0%) were slightly more tentative than interns (M = 5.5%) and attendings (M = 5.6%). Thus, interns and residents reconsidered more actions than attendings, but residents were more tentative than either interns or attendings (again, with no statistical significance).

Cognitive Load
Diabetes: Participants’ self-reported cognitive load for completing PEFs for this scenario ranged from 4 to 8 on a scale of 1-10 (M = 6.2; see Table 4). While the sample was too small for significance testing, we noted that PGY 1 interns reported lower cognitive load for this scenario (M = 5.8) than attendings did (M = 6.8). PGY 3 residents’ ratings fell between those groups (M = 6.0).

Table 4. Self-Reported Cognitive Load by Level of Expertise (N = 12)

Scenario/Level of Expertise     Minimum   Maximum    M
Diabetes
  PGY 1 interns (n = 5)            4         8      5.8
  PGY 2-4 residents (n = 3)        5         7      6.0
  Attendings (n = 4)               5         8      6.8
  Total                            4         8      6.2
Angina
  PGY 1 interns (n = 5)            5         7      6.6
  PGY 2-4 residents (n = 3)        6         7      6.7
  Attendings (n = 4)               5         8      6.5
  Total                            5         8      6.6

Angina: Participants rated the cognitive load of this scenario slightly higher than diabetes (M = 6.6; see Table 4), but not significantly so. Interns, residents, and attendings rated it relatively similarly (Ms = 6.6, 6.7, and 6.5, respectively).

Participant Ratings of Scenario Authenticity
Participants rated both the diabetes and angina cases as being highly authentic, with a mean of 4.8 for diabetes and 4.6 for angina (both on a scale of 1-5). Although there was not enough power to test statistically, we noted that attendings rated the authenticity equal to or higher than interns or residents (see Table 5).

Table 5. Reported Scenario Authenticity by Level of Expertise (N = 12)

Scenario/Level of Expertise     Minimum   Maximum    M
Diabetes
  PGY 1 interns (n = 5)            4         5      4.8
  PGY 2-4 residents (n = 3)        4         5      4.7
  Attendings (n = 4)               4         5      4.8
  Total                            4         5      4.8
Angina
  PGY 1 interns (n = 5)            4         5      4.6
  PGY 2-4 residents (n = 3)        4         5      4.3
  Attendings (n = 4)               4         5      4.8
  Total                            4         5      4.6

Discussion

We have described the development and implementation of two scenarios used to formatively assess the clinical reasoning of physicians with a range of experience (i.e., interns, residents, and attendings). Findings from the implementation evaluation suggest that our strategies of including diagnostic ambiguity and contextual factors (i.e., diagnostic suggestions by the SP) may have increased complexity, possibly influencing physicians to consider a diverse range of differential diagnoses. Moreover, participants’ reconsiderations, tentative language, moderate cognitive load ratings, and high authenticity ratings indicate that the design was challenging and engaging enough for interns through attendings. Of interest, we note that while most participants selected the correct leading diagnosis, reported management choices displayed greater diversity.

These scenarios place a priority on examining and practicing clinical reasoning behaviors. This approach allows participants and instructors to focus not only on the outcome or solution to a diagnostic problem but equally on the nuanced and iterative meaning-making process leading to that solution.4,41,42 Moreover, the inclusion of planned contextual factors provides opportunities to practice and reflect on the ways the meaning-making process can shift across contexts.10 For instance, the content analysis of the angina scenario PEF suggests that participants may have given added weight to GERD (the most frequent diagnostic suggestion) as a differential, and many participants reflected on this contextual factor in their think-alouds afterward.

Opportunities for reflection were further supported by the think-alouds. Our brief analysis of these reflections indicates that the scenarios were complex enough for most participants, particularly newer clinicians, to reflect on possible changes to their practice through reconsiderations. Moreover, all participants used some tentativeness markers, which have been argued to indicate that an event has not been fully processed.40 Thus, even when physicians reach a diagnosis and treatment plan, our preliminary results suggest that these cases may be complex enough to warrant some further processing.

Clinical reasoning likely differs according to level of expertise,43 as suggested by attendings’ lower use of tentative language and reconsiderations compared to interns and residents. Nonetheless, the cognitive load and scenario authenticity findings reported here further support the idea that these scenarios can provide interns, residents, and attendings with a sufficiently challenging situation in which to engage. For example, two attendings had a relatively high cognitive load and the highest authenticity ratings when compared to residents, suggesting that the scenarios can be used across expertise levels. This approach potentially provides an alternative for those working to support the lifelong development and improvement of clinical reasoning in physicians of multiple levels of training.6,44

Reflections on Development
Scenario-based simulation design is a complex task wherein designers attempt to plan for many of the possible pathways scenario participants may take. In our experience, incorporating diverse stakeholders’ unique perspectives produced robust scenarios and left us better prepared to address any unusual choices participants made.

For others considering participatory design approaches, we recommend that one individual be responsible for leading and coordinating the design effort, scheduling outreach to the different subject matter experts (SMEs), and supporting the occasional need to resolve conflicting team perspectives. While coordination among multiple SMEs during the extended design and testing phases was sometimes time consuming, the process resulted in scenarios that required minimal revision during the implementation phase. As a result, all 24 scenario runs (12 diabetes and 12 angina) were of sufficient quality for inclusion in our larger study. Given the cost and scheduling constraints associated with scenarios, this added planning time seems worthwhile: it minimizes the need to overrecruit study participants and prevents the loss of staff and laboratory time, funds, and participants’ data, time, and effort.

Incorporating diagnostic ambiguity proved to be a challenging task throughout all design phases. For example, writing detailed past, family, and medical histories made it more difficult to predict which aspects participants might attend to. However, the participatory design approach made this process easier: Our clinically oriented SMEs reviewed the SP cases multiple times to explore potential participant actions. Additionally, during the pilot phase, the SPs and SP educator highlighted the difficulty SPs might have in preparing to implement these scenarios. This helped us enhance our training and retraining strategies to include SP think-alouds and the development of a rehearsal guide (Appendices F for diabetes and K for angina). During implementation, the team noted the importance of tracking the variety of questions participants asked the SPs. This observation resulted in the development of the SP implementation checklist for each case (Appendices N for diabetes and O for angina). Subsequently, these became an important part of our process for determining scenario implementation quality.

Through this careful design process, we were able to more consistently implement scenarios while still allowing for participant flexibility in the face of the ambiguity and contextual factors, resulting in the consideration of a variety of diagnoses and management strategies, as well as opportunities to reconsider these decisions.

Limitations
First, because recruiting physician participants for research proved difficult, the sample size was small (N = 12), making it difficult to generalize results beyond this group. Second, designing and refining these scenarios was challenging. Although our inclusion of multiple SMEs resulted in robust scenarios, taking an explicit participatory design approach was logistically demanding. For example, scheduling and coordinating meetings with SMEs required patience, and there were occasional disagreements among SMEs about which aspects of the case were relevant and should be included. The lead instructional designer sought resolution through careful discussion. Additionally, during the implementation phase, we noted that a more complex scenario required more training and retraining of our SPs than initially expected. We addressed this by training SPs in pairs and providing detailed feedback using the implementation checklists (Appendices N & O). However, these scenarios were part of a research program, so some of these processes may be more rigorous than needed for other uses of the scenarios.

Lastly, the use of think-alouds as a reflection strategy, as opposed to relying on brief faculty feedback, may be challenging for programs with time and space constraints because an individual think-aloud requires the same amount of time as the participant’s scenario and, ideally, a private room in which to complete the protocol uninterrupted. This, in fact, is one of the reasons we curtailed scenario times to approximately 15 minutes. Also, proper implementation of think-alouds requires those sitting with the participant to be patient and wait until thinking aloud is complete. Most team members indicated early on that this was difficult because they often thought of questions for the participant as they listened. Yet they reported that it became easier with practice and was a valuable way to allow the participant space to reflect.

Future Directions
Developing these scenarios highlighted the need to further examine the benefits of using scenario-based simulations for evaluating and teaching clinical reasoning specifically focused on management choices. For example, the broad variation in the management choices participants considered and the effect of acuity (e.g., uncertainty about admitting or treating the angina patient in the outpatient setting) and resource availability on those plans suggest that these kinds of scenarios could be important tools.45

Additionally, since reflection is considered a vital component of simulation, the integration of open-ended PEFs and think-alouds could be used as a complementary reflection experience for simulation stakeholders that does not require recruiting large numbers of clinical faculty. Instead, this suite of resources is administered by trained research associates and simulation educators seeking to elicit what participants are thinking as they engage. When used in conjunction with other simulation-based experiences relying on the support of clinical faculty or trained debrief facilitators or SPs, these simulations may offer learners a broader set of reflection experiences. Further research could be done to examine this combination of strategies.

Lastly, our strategy of integrating ambiguity did help create scenarios that were well received by diverse participants; however, because the process presented some challenges, developing systematic guidelines or a tool kit might be helpful to other simulation-based instructional designers.


Author Information

  • Alexis Battista, PhD: Assistant Professor, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
  • Abigail Konopasky, PhD: Assistant Professor, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
  • Divya Ramani, MS: Research Associate, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
  • Megan Ohmer: Research Associate, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
  • Jeffrey Mikita, MD: Chief, Department of Simulation, Walter Reed National Military Medical Center
  • Anna Howle, MAC: Simulation Educator, Department of Medical Simulation, Walter Reed National Military Medical Center
  • Sarah Krajnik, RN: Nurse Educator, Department of Simulation, Walter Reed National Military Medical Center
  • Dario Torre, MD, PhD: Associate Director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences
  • Steven J. Durning, MD, PhD: Director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences

Acknowledgments
We gratefully acknowledge the support and guidance from Drs. Lou Pangaro, Paul Hemmer, Elexis McBee, Jeff LaRochelle, and Luke Surry while refining these scenarios prior to their implementation. We also want to thank our standardized participants, Claude Stark, Chris Morrow, and Lindsay Williams, and the Walter Reed Simulation Center for their advice and feedback throughout development and implementation. We also thank the reviewers who provided thoughtful and invaluable feedback on the manuscript and accompanying appendices.

Disclosures
None to report.

Funding/Support
Dr. Battista reports grants from JPC-1, CDMRP-Congressionally Directed Medical Research Programs, during the conduct of the study.

Informed Consent
All identifiable persons in this resource have granted their permission.

Ethical Approval
The Uniformed Services University of the Health Sciences Office of Human Subject Protection institutional review board approved this study.

Disclaimer
The opinions and assertions expressed herein are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences or the Department of Defense.


References

  1. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84(8):1022-1028. https://doi.org/10.1097/ACM.0b013e3181ace703
  2. Heneghan C, Glasziou P, Thompson M, et al. Diagnostic strategies used in primary care. BMJ. 2009;338(7701):b946. https://doi.org/10.1136/bmj.b946
  3. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
  4. Juma S, Goldszmidt M. What physicians reason about during admission case review. Adv Health Sci Educ Theory Pract. 2017;22(3):691-711. https://doi.org/10.1007/s10459-016-9701-x
  5. Young M, Thomas A, Lubarsky S, et al. Drawing boundaries: the difficulty in defining clinical reasoning. Acad Med. 2018;93(7):990-995. https://doi.org/10.1097/ACM.0000000000002142
  6. Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
  7. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94-100. https://doi.org/10.1111/j.1365-2923.2009.03507.x
  8. Surry LT, Torre D, Durning SJ. Exploring examinee behaviours as validity evidence for multiple-choice question examinations. Med Educ. 2017;51(10):1075-1085. https://doi.org/10.1111/medu.13367
  9. Power A, Lemay J-F, Cooke S. Justify your answer: the role of written think aloud in script concordance testing. Teach Learn Med. 2017;29(1):59-67. https://doi.org/10.1080/10401334.2016.1217778
  10. Durning SJ, Artino AR, Boulet JR, Dorrance K, van der Vleuten C, Schuwirth L. The impact of selected contextual factors on experts’ clinical reasoning performance (does context impact clinical reasoning performance in experts?). Adv Health Sci Educ Theory Pract. 2012;17(1):65-79. https://doi.org/10.1007/s10459-011-9294-3
  11. Forsberg E, Ziegert K, Hult H, Fors U. Clinical reasoning in nursing, a think-aloud study using virtual patients—a base for an innovative assessment. Nurse Educ Today. 2014;34(4):538-542. https://doi.org/10.1016/j.nedt.2013.07.010
  12. Bucknall TK, Forbes H, Phillips NM, et al; FIRST2ACT Investigators. An analysis of nursing students’ decision-making in teams during simulations of acute patient deterioration. J Adv Nurs. 2016;72(10):2482-2494. https://doi.org/10.1111/jan.13009
  13. Prakash S, Bihari S, Need P, Sprick C, Schuwirth L. Immersive high fidelity simulation of critically ill patients to study cognitive errors: a pilot study. BMC Med Educ. 2017;17:36. https://doi.org/10.1186/s12909-017-0871-x
  14. Tschan F, Semmer NK, Gurtner A, et al. Explicit reasoning, confirmation bias, and illusory transactive memory: a simulation study of group medical decision making. Small Group Res. 2009;40(3):271-300. https://doi.org/10.1177/1046496409332928
  15. Burbach B, Barnason S, Thompson SA. Using “think aloud” to capture clinical reasoning during patient simulation. Int J Nurs Educ Scholarsh. 2015;12(1):1-7. https://doi.org/10.1515/ijnes-2014-0044
  16. Durning SJ, Artino AR Jr, Beckman TJ, et al. Does the think-aloud protocol reflect thinking? Exploring functional neuroimaging differences with thinking (answering multiple choice questions) versus thinking aloud. Med Teach. 2013;35(9):720-726. https://doi.org/10.3109/0142159X.2013.801938
  17. Ericsson KA, Simon HA. How to study thinking in everyday life: contrasting think-aloud protocols with descriptions and explanations of thinking. Mind Cult Act. 1998;5(3):178-186. https://doi.org/10.1207/s15327884mca0503_3
  18. Alessi SM. Fidelity in the design of instructional simulations. J Comput Based Instr. 1988;15(2):40-47.
  19. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc. 2007;2(3):183-193. https://doi.org/10.1097/SIH.0b013e3180f637f5
  20. Kneebone RL, Scott W, Darzi A, Horrocks M. Simulation and clinical practice: strengthening the relationship. Med Educ. 2004;38(10):1095-1102. https://doi.org/10.1111/j.1365-2929.2004.01959.x
  21. Battista A. An activity theory perspective of how scenario-based simulations support learning: a descriptive analysis. Adv Simul (Lond). 2017;2:23. https://doi.org/10.1186/s41077-017-0055-0
  22. Durning SJ, Artino AR Jr, Pangaro LN, van der Vleuten C, Schuwirth L. Perspective: redefining context in the clinical encounter: implications for research and training in medical education. Acad Med. 2010;85(5):894-901. https://doi.org/10.1097/ACM.0b013e3181d7427c
  23. Durning SJ, Artino A, Boulet J, et al. The feasibility, reliability, and validity of a post-encounter form for evaluating clinical reasoning. Med Teach. 2012;34(1):30-37. https://doi.org/10.3109/0142159X.2011.590557
  24. Weinstein A, Pinto-Powell R. Introductory clinical reasoning curriculum. MedEdPORTAL. 2016;12:10370. https://doi.org/10.15766/mep_2374-8265.10370
  25. Felix T, Richard D, Faber F, Zimmermann J, Adams N. Coffee Talks: an innovative approach to teaching clinical reasoning and information mastery to medical students. MedEdPORTAL. 2015;11:10004. https://doi.org/10.15766/mep_2374-8265.10004
  26. Moroz A. Clinical reasoning workshop: cervical spine and shoulder disorders. MedEdPORTAL. 2017;13:10560. https://doi.org/10.15766/mep_2374-8265.10560
  27. Daniel M, Carney M, Khandelwal S, et al. Cognitive debiasing strategies: a faculty development workshop for clinical teachers in emergency medicine. MedEdPORTAL. 2017;13:10646. https://doi.org/10.15766/mep_2374-8265.10646
  28. Levin M, Cennimo D, Chen S, Lamba S. Teaching clinical reasoning to medical students: a case-based illness script worksheet approach. MedEdPORTAL. 2016;12:10445. https://doi.org/10.15766/mep_2374-8265.10445
  29. Weinstein A, Gupta S, Pinto-Powell R, et al. Diagnosing and remediating clinical reasoning difficulties: a faculty development workshop. MedEdPORTAL. 2017;13:10650. https://doi.org/10.15766/mep_2374-8265.10650
  30. Beaver B, Wittler M. Toxic ingestion/acute tricyclic antidepressant (TCA) ingestion. MedEdPORTAL. 2015;11:10227. https://doi.org/10.15766/mep_2374-8265.10227
  31. Metzner J, Lombaard S, Au A, Kim S. Venous air embolism curriculum. MedEdPORTAL. 2008;4:807. https://doi.org/10.15766/mep_2374-8265.807
  32. Donegan D, Mader R, Weigel S, Kennel KA. Newly diagnosed type 1 diabetes mellitus: a resident simulation. MedEdPORTAL. 2013;9:9345. https://doi.org/10.15766/mep_2374-8265.9345
  33. Glick S, Buchanan D, Rohr L, Kehoe L. Homeless health care simulated patient case. MedEdPORTAL. 2007;3:759. https://doi.org/10.15766/mep_2374-8265.759
  34. Heitz C, Burton JH, Fortuna TJ, Kuehl DR, Perkins JC, Prusakowski MK. The undifferentiated chest pain patient—an introduction to the ED approach to the patient. MedEdPORTAL. 2013;9:9482. https://doi.org/10.15766/mep_2374-8265.9482
  35. Könings KD, Brand-Gruwel S, van Merriënboer JJ. Towards more powerful learning environments through combining the perspectives of designers, teachers, and students. Br J Educ Psychol. 2005;75(4):645-660. https://doi.org/10.1348/000709905X43616
  36. Jonassen DH, Rohrer-Murphy L. Activity theory as a framework for designing constructivist learning environments. Educ Technol Res Dev. 1999;47(1):61-79. https://doi.org/10.1007/BF02299477
  37. Jonassen DH. Revisiting activity theory as a framework for designing student-centered learning environments. In: Jonassen DH, Land SM, eds. Theoretical Foundations of Learning Environments. Mahwah, NJ: Lawrence Erlbaum Associates; 2000:89-122.
  38. Boren T, Ramey J. Thinking aloud: reconciling theory and practice. IEEE Trans Prof Commun. 2000;43(3):261-278. https://doi.org/10.1109/47.867942
  39. Brünken R, Seufert T, Paas F. Measuring cognitive load. In: Plass JL, Moreno R, Brünken R, eds. Cognitive Load Theory. New York, NY: Cambridge University Press; 2010:181-202.
  40. Tausczik YR, Pennebaker JW. The psychological meaning of words: LIWC and computerized text analysis methods. J Lang Soc Psychol. 2010;29(1):24-54. https://doi.org/10.1177/0261927X09351676
  41. Ilgen JS, Eva KW, Regehr G. What’s in a label? Is diagnosis the start or the end of clinical reasoning? J Gen Intern Med. 2016;31(4):435-437. https://doi.org/10.1007/s11606-016-3592-7
  42. ten Cate O, Durning SJ. Understanding clinical reasoning from multiple perspectives: a conceptual and theoretical overview. In: ten Cate O, Custers EJFM, Durning SJ, eds. Principles and Practice of Case-Based Clinical Reasoning Education: A Method for Preclinical Students. Berlin, Germany: Springer Open; 2018:35-46.
  43. Schmidt HG, Mamede S. How to improve the teaching of clinical reasoning: a narrative review and a proposal. Med Educ. 2015;49(10):961-973. https://doi.org/10.1111/medu.12775
  44. Nestel D, Kelly M. Strategies for research in healthcare simulation. In: Nestel D, Kelly M, Jolly B, Watson M, eds. Healthcare Simulation Education: Evidence, Theory and Practice. Hoboken, NJ: John Wiley & Sons; 2018:37-44.
  45. Cook DA, Sherbino J, Durning SJ. Management reasoning: beyond the diagnosis. JAMA. 2018;319(22):2267-2268. https://doi.org/10.1001/jama.2018.4385


Citation

Battista A, Konopasky A, Ramani D, et al. Clinical reasoning in the primary care setting: two scenario-based simulations for residents and attendings. MedEdPORTAL. 2018;14:10773. https://doi.org/10.15766/mep_2374-8265.10773

Received: May 10, 2018

Accepted: October 12, 2018