Partnering With Interpreter Services: Standardized Patient Cases to Improve Communication With Limited English Proficiency Patients

Introduction
As the number of patients with limited English proficiency grows, there is increasing awareness in the medical community about disparities in health outcomes for this population. The proper use of professional medical interpreters improves communication between physicians and patients with limited English proficiency. Typically, however, little curricular time in medical training is devoted to this competency.

Methods
We developed a two-station objective structured clinical examination (OSCE) in which learners worked with interpreters to conduct medical interviews with Spanish-speaking standardized patients (SPs). Cases were designed for use with residents from any medical specialty and to have personal and emotional richness in keeping with the real-life circumstances of many patients. Twelve residents from six medical specialties completed a session evaluation and were assessed by faculty, SPs, and interpreters using existing validated instruments and case-specific checklists.

Results
All residents reported that the cases mimicked real patient encounters. The checklists were well received and easy to use. While scores varied between residents, deficiencies were identified in basic communication skills for interacting with a non-English-speaking SP through an interpreter, including maintaining proper eye contact and open body posture with patients and introducing and clearly articulating the role of the interpreter.

Discussion
A two-station OSCE utilizing professional medical interpreters and Spanish-speaking SPs was well received by all participants. Residents' scores helped identify common skill gaps in their work with interpreters. Based on the success of the pilot deployment, we plan to target educational interventions at these common deficiencies and offer the OSCE to more trainees.

Despite the growing number of patients with limited English proficiency (LEP) in the health care setting, and recognition of the importance of culturally competent care, there is typically little curricular time in medical training devoted to the effective use of interpreters.
Standardized patients (SPs) are routinely utilized in medical education to present learners with opportunities to develop, practice, and demonstrate skills in a contrived setting that simulates the real clinical environment. Objective structured clinical examinations (OSCEs) utilizing SPs allow for assessment of learners at high levels on Miller's pyramid; learners can show and do rather than simply demonstrate knowledge. There are only a few MedEdPORTAL publications of SP cases designed for use by Spanish-speaking actors, and their number relative to cases written for English-speaking actors does not reflect the growing share of the population that Spanish-speaking individuals represent. These existing publications focus on verification of fluency in undergraduate medical learners (without certified medical interpreter involvement) or illustrate the difference between using an in-person or telephone interpreter, compared with a bilingual family member. Existing resources are not designed to assess residents' abilities to use a certified medical interpreter or to illustrate best practices for doing so through evaluation on multiple metrics. The incorporation of interpreters into the SP encounter allows for a more accurate simulation of a real patient visit, and improving residents' skills with interpreters is an important step in global improvement of physician-patient communication.
To address the need for incorporation of interpreters into SP encounters with residents, we developed two cases (Appendices A and B) targeted at postgraduate resident learners, in which native Spanish-speaking actors served as SPs and certified medical interpreters from our academic medical center served in their typical capacity as interpreters. The cases were initially designed as an assessment mechanism as part of our yearlong, resident-led Spanish language curriculum, which is described below. The purpose of the current assessment was to demonstrate the feasibility of the two cases on a small scale with residents from multiple specialties, irrespective of their participation status in the Spanish language curriculum.

Curricular Context
We designed two cases for an OSCE as part of the broader Yale Resident Spanish Initiative (YRSI). The YRSI is a yearlong, self-directed Spanish language curriculum for residents designed to improve language skills and understanding of health-related cultural beliefs.
Our residents generally begin the program with some Spanish language skills and desire to grow in their proficiency throughout the year, through our online self-paced modules, supplementary video materials, and in-person language tutoring with native Spanish speakers. One unintended outcome of YRSI is that hospital interpreters have observed that residents participating in YRSI appear more comfortable utilizing interpreters during patient encounters than residents who do not participate. Therefore, we initially created the OSCE to assess the impact of YRSI on residents' use of medical interpreters by measuring the effectiveness of their interpreter use during an SP interview. The OSCE in this pilot project was administered at the start of the curriculum (October 2017) and 6 months later (April 2018). Participants represented a mix of residents enrolled and not enrolled in YRSI. None had received specific training related to the use of medical interpreters as part of YRSI prior to the OSCE or during the study period.

Case Development
We conceived of the two cases, which then went through many rounds of revision with the SP program team and eventually with the SPs themselves. Choosing topics sufficiently basic to be addressed by residents from any specialty, we designed the cases to be portrayed by either a man or woman and to have personal and emotional richness in keeping with the real-life circumstances of many patients seen in our emergency department and urgent care clinics. The final cases were patients presenting to an urgent care setting with abdominal pain (Appendix A) and back pain (Appendix B). After case development, the full project was submitted to the institutional review board for approval and granted a status of exempt, allowing us to proceed with SP training and participant recruitment.

Actor and Interpreter Recruitment and Training
We recruited four native Spanish speakers as SPs to most accurately reflect the barrier in physician-patient communication seen with LEP patients. Actors were offered the standard rate of payment as part of the SP program, though some actors chose to volunteer their time. SPs received their cases in advance and were told to review the script prior to a 2-hour training session with the director of the SP program at our institution.
The training session first focused on the details of the cases, including questions and suggestions brought up by the SPs. SPs provided helpful feedback to add to the authenticity of the cases based on their lived experience, noting nuances in work status with certain types of visas and emotional responses concerning immigration status. We then reviewed session logistics. During the remainder of the training, the SPs were coached to do the following:
• Speak only in Spanish during the OSCE and reveal no English comprehension (despite their fluency in English as a second language).
• Defer all responses until after the interpreter had completed the interpretation of the physician's statements.
• Reveal potentially sensitive information (e.g., immigration status, employment, family life) only if asked or if the physician created an environment in which such disclosures would feel natural and safe.
• Use the end-of-session assessment checklists.
We recruited interpreters from the language services department at our hospital and paid them an hourly rate similar to that offered to the SPs. Since they were being asked to interpret as they would with any real patient encounter, we provided no specific training other than a 10-minute overview of logistics prior to the OSCE. The interpreters were encouraged to act just as they would for a typical patient encounter and to stand or sit where they felt most comfortable.
The cost of recruiting, training, and hiring the actors and interpreters was approximately $1,200, covering 2 hours of training for SPs and 2 hours of OSCEs for four SPs and four interpreters, repeated in fall and spring sessions.

Resident Recruitment
We recruited 12 resident physicians from different training programs, postgraduate year levels, and participation statuses in the YRSI. Residents were recruited from internal medicine, pediatrics, general surgery, emergency medicine, urology, and radiation oncology. They were given no specific training related to the OSCE other than a 2-minute overview of logistics immediately before each session and received an $80 honorarium for their participation ($40 for each session).

Session Logistics
The OSCE occurred over two evenings as stand-alone sessions and was run with four of the 12 resident participants at a time. Each evening, residents rotated through the OSCE in three waves, each completing one of the two cases per session. As a result, four rooms were operating simultaneously with a resident, SP, and interpreter in each. Rooms were labeled A, B, C, and D.
We utilized rooms in our university's clinical exam practice suite. The rooms were equipped with audiovisual capability and simulated an outpatient office setting. Before entering the exam room, residents received a file folder labeled A, B, C, or D, corresponding to the room and case for the encounter. The folder contained basic case details and instructions (Appendices C and D). As noted above, prior to the start of each wave, a member of the OSCE team spent 2 minutes explaining logistics to the residents. The important details communicated at this point included the time allotment of 15 minutes for the case and an explanation of the instructions contained in the resident folders. Specifically, residents learned that they would focus their attention on history and counseling during the OSCE. For the physical examination, residents would simply tell the SP that they would like to perform a physical examination, at which point the SP would hand them a checklist containing the relevant findings (Appendices E and F).
After reviewing the case details outside the room, the resident met the interpreter, and they entered the room together. After 15 minutes, residents were informed that the encounter was finished, at which time the interpreters and SPs were given 5 minutes to complete a rating scale (Appendices G and H) and the residents completed a session evaluation (Appendix I).
At the end of the wave, there was a 5-minute window for transition, during which the previous group of residents exited while the next group arrived. On each evening of the OSCE, each SP and interpreter completed their encounter three times with three different residents during a roughly 90-minute workshop (Appendix J). The OSCE was repeated 6 months after the first session, with residents completing the case they had not completed during the first session. One additional staff member was required to keep the sessions on schedule, review the logistics with participants, and file the video footage upon completion of the OSCE.
Learner Assessment
Following each encounter, interpreters and SPs completed rating scales to evaluate the resident participant and his/her effectiveness in using interpreters and communicating with an LEP patient. Interpreters utilized the University of California, Irvine, School of Medicine (UCISOM) Interpreter Scale (IS, Appendix G), while SPs utilized the Interpreter Impact Rating Scale (IIRS, Appendix H).
The IS is a 13-question validated instrument designed to measure communication skills as observed by trained interpreters, focusing on objective verbal and observable behaviors and minimizing interpreters' emotional responses to questions. The IIRS is a seven-question validated instrument designed for use by SPs in any language encounter; it emphasizes patient-centeredness and observable behaviors as a patient with emotional engagement in the encounter.
Once both OSCE sessions had been completed, the videos of each encounter were reviewed by a faculty member using the Faculty Observer Rating Scale (FORS, Appendix K), an 11-question validated instrument designed for use by faculty not proficient in the language of the encounter, independent of the clinical case, and reflective of objective observation of communication skill behaviors. The faculty member also utilized case-specific checklists (Appendices L and M), which we designed with SP program staff to measure the number of case elements elicited during each interview. The case-specific checklists were designed to be dense, so as to allow for differentiation in scores between participants, because an evaluation of process without accounting for content would provide an incomplete picture of communication competency in a clinical encounter. Finally, the faculty reviewer made unstructured observations to be provided back to the residents as formative feedback.
We utilized and scored the IS, IIRS, and FORS as described by Lie and colleagues in their validation studies.
These instruments utilized a 5-point Likert scale (1 = marginal/poor, 5 = outstanding) to gauge performance on each response item. All raters (interpreters, SPs, and faculty) reviewed the checklists in advance and had opportunities to ask clarifying questions. For example, because in practice many interpreters introduce themselves, raters were told to give residents credit for having introduced the interpreter even if the interpreter had initiated the introduction. However, if the interpreter had to interrupt the resident because the resident had not introduced the interpreter, then the resident was not given credit. We report means and standard deviations for the IS, IIRS, and FORS in the Results section, below.

OSCE Assessment
Residents completed a postsession evaluation after each OSCE: the Resident Session Evaluation Form (Appendix I). We used this evaluation form internally; we did not run correlations between residents' self-assessed perception of their performance and their actual assessment scores. We conducted unstructured interviews of the interpreters and SPs during and after the first OSCE and held a focus group for interpreters and SPs after the second OSCE. We asked interpreters and SPs to comment on their overall impression of the sessions, logistics, and their interest in future participation.

Results
On the Resident Session Evaluation Form, all 12 resident physicians felt the SP encounters were effective or very effective at simulating a real encounter with an LEP patient, including the inherent challenges with time, cultural differences, and the role of an interpreter. On a 5-point Likert scale (1 = not effective, 5 = very effective), residents' mean rating of the effectiveness of the OSCE in mimicking a real patient encounter was 4.9, and their mean rating of their overall experience with the OSCE was 4.8.
During interviews and focus groups, interpreters and SPs, respectively, found the IS and IIRS intuitive and easy to use. All parties felt that the simulated encounter was an important topic for medical trainees, that it was well organized, that the cases were realistic and well written, and that training was adequate. Some participants were surprised that many residents used only a portion of the time available for the encounters, though others noted that the pace of the encounters reflected real life. Multiple SPs and interpreters commented that residents did not address the cultural beliefs and questions of the patients.
All SPs and interpreters were enthusiastic about the prospect of getting involved in other curricular offerings related to working with LEP patients.
The faculty reviewer found the FORS and case-specific checklists easy to use. Suggestions for improving the case-specific checklists emerged based on items that came up in some interviews. For example, some residents appropriately inquired about domestic violence, though this was not an item on the case-specific checklist.
Overall, combined mean scores on the IS, IIRS, and FORS ranged from 4.0 to 4.8 on 5-point Likert scales (1 = marginal/poor, 5 = outstanding; Table 1). Resident performance on the IS and FORS was not significantly different; however, a significant difference was found on the IIRS, with higher scores noted for residents participating in the back pain case. Resident scores on the case-specific checklists varied widely, though all showed room for improvement (Table 1). Item-specific analysis of the UCISOM checklists revealed that the most common competencies where residents struggled were maintaining proper eye contact and open body language with patients and introducing and clearly articulating the role of the interpreter (Table 2, Table 3, and Table 4). Because this OSCE was developed as a part of the broader YRSI curriculum, we conducted a separate analysis of differences in checklist scores based on YRSI participation status, which we have described elsewhere.

Discussion
We designed and implemented two OSCE cases for residents utilizing SPs with LEP relying on an interpreter. The cases were well received by residents, certified medical interpreters, and SPs, who reported that the cases accurately depicted real-life patient scenarios, including the personal and emotional contexts of illness. While these cases were designed as part of our yearlong, resident-led Spanish language curriculum, our findings suggest utility outside of the YRSI.

As demonstrated on the assessments by interpreters, SPs, and the faculty observer, resident physicians lacked some basic communication skills for interacting with a non-English-speaking SP through an interpreter. These deficiencies highlight clear areas for learning that could be targeted in medical training. Emphasizing more effective body language and eye contact, educating residents on how to address patients' cultural beliefs, and explaining clearly the role of an interpreter could improve residents' readiness to work with interpreters and, by extension, their effectiveness in communicating with LEP patients during interpreter encounters.
These deficiencies have been previously documented in the existing literature on physicians' effectiveness in medical encounters with LEP patients, underscoring the importance of improving medical training in this domain. There is reason for optimism, as a recent study found that 76% of accredited medical schools in the US now report having some curricular material to teach medical students how to interact with LEP patients and/or use interpreters, though how this early exposure might translate to skills at the bedside later in training and practice is unclear.
The checklists used for assessment were straightforward and required minimal advance training for interpreters, SPs, and faculty. While residents were pleased with the formative feedback they received, its utility might have been greater had it been provided in real time. Because our evaluation was part of a study, providing timely feedback was not possible for this project, but it could easily be worked into future iterations of the OSCE as a purely educational exercise. This is especially true of the case-specific checklists, since residents showed considerable room for improvement on them. The case-specific checklists were lengthy and time intensive to complete, however, and would likely be difficult to score in real time (as opposed to via video review) simultaneously with the FORS. In future iterations, instead of deliberately scoring the case-specific checklists, educators could use them as feedback tools to highlight the psychosocial richness that may be uncovered through optimal, patient-centered use of medical interpreters.
Finally, the significant difference in IIRS scores for resident participants in the abdominal pain versus back pain cases likely reflects individual differences in scoring style of specific SPs rather than a true difference in performance or difficulty between the cases. No differences between the cases were identified by the other checklists. Additionally, with participants scoring in the 4-5 range on the Likert scale for both cases, the significant difference found on the IIRS may not be educationally meaningful.
The measurement of effectiveness for this OSCE is limited by its size and scope. The findings were drawn from a small number of resident participants as this implementation served as a pilot project. Additionally, while we did ask residents to self-reflect on their growth in comfort interacting with non-English-speaking patients, it was beyond the scope of the project to assess whether having been exposed to these cases impacted the residents' practice behavior in real clinical settings. In addition, although the validated instruments we chose (IS, IIRS, FORS) were designed to be applicable to encounters with patients who speak any non-English language, we limited our SPs to native Spanish speakers, as part of our assessment for resident participants in our Spanish language learning course. Our findings may not be generalizable to the use of the case scenarios in other languages.
In the future, we hope to offer this OSCE to the entire YRSI group in tandem with small-group discussion sessions or self-directed learning on best practices in utilizing interpreters. Given the straightforward nature of the medical topics contained in the OSCE, we plan to evaluate its appropriateness for use with medical students at our institution. We hope that these SP cases, if implemented as part of a broader curriculum, may allow for more effective education of graduate and undergraduate medical learners on interaction with LEP patients.