Constructing a Shared Mental Model for Feedback Conversations: Faculty Workshop Using Video Vignettes Developed by Residents

Introduction: Providing feedback is a fundamental principle in medical education; however, as educators, our community often lacks the skills needed to give meaningful, impactful feedback to those under our supervision. By improving our feedback-giving skills, we provide concrete ways for trainees to optimize their performance, ultimately leading to better patient care.
Methods: In this faculty development workshop, faculty groups used six feedback video vignettes scripted, enacted, and produced by residents to arrive at a shared mental model of feedback. During workshop development, we combined qualitative analysis of faculty narratives with the findings of a focused literature review to define dimensions of feedback.
Results: Twenty-three faculty members (physical medicine and rehabilitation and neurology) participated in seven small-group workshops. Analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories. After incorporating the results of a focused literature review, we identified 48 items grouped into 10 dimensions of feedback. Online session evaluations indicated that faculty members liked the workshop's format and felt they were better at providing feedback to residents as a result of the workshop.
Discussion: Small faculty groups were able to develop a shared mental model of the dimensions of feedback that was also grounded in the medical education literature. The theme of specificity of feedback was prominent and echoed recent medical education research findings. Defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can enhance reliable scoring of feedback performance assessments and should be the next step in our work.


Introduction
The ultimate goal of assessment practices in health professional education is improved health care. High-quality, credible feedback is necessary for assessment to provide a meaningful mechanism through which physicians can be expected to grow and provide better care. Feedback is fundamental to everything physicians do: it is an essential part of every framework, every curriculum, every teaching interaction.
When we explored the experiences of physical medicine and rehabilitation (PM&R) residents who were part of a structured program of self-assessment followed by faculty feedback, we found that both residents and faculty felt they were not ready for the challenges encountered in real-life feedback conversations. The impetus for improving feedback-givers' skills was born out of the study findings, as it turned out that learners were not easily fooled by substandard feedback. Interestingly, a similar theme emerged from a mixed-methods study that focused on faculty receiving feedback from residents. These combined findings across specialties and professional roles reinforced our belief that the development of feedback-giving skills is an underexplored area of medical education delivery and scholarship, and that strengthening it may allow us to take another step towards improving physicians' performance and, ultimately, providing better patient care.
Meaningful and impactful feedback conversations are not easy for most feedback providers. On the one hand, faculty members' tensions affect not only the feedback process itself (reaching the right balance of constructive and positive feedback; dealing with their own perceptions of self-efficacy as well as the receptiveness, insight, potential, and skill of the residents) but also the resident-faculty relationship and contextual factors. On the other hand, faculty from both university-based and community-based programs reported inadequate training and incomplete understanding of the best ways to deliver feedback despite the availability of excellent practical guides. This does not appear to be just a perception issue: a recent qualitative study of simulated feedback encounters suggested that faculty skills do not in fact match recommended practices in several areas.
A number of curricula to improve faculty feedback-giving skills have been described by others, including several MedEdPORTAL publications. Some of these focus on peer or near-peer feedback. For example, Brown, Rangachari, and Melia developed an interactive multimodal feedback coaching workshop for residents giving feedback to interns. Tews and colleagues created a course to improve resident skills in providing feedback to medical students. Others have experimented with improving faculty feedback skills. For example, Schlair, Dyche, and Milan described a longitudinal faculty development program designed by faculty leaders and found positive impact on feedback quality as perceived by residents. Sargeant and colleagues described a workshop based on a thoroughly researched coaching model of feedback.
Although the imposition of predeveloped frameworks and top-down faculty development is tempting and may be an efficient approach, the very process of creating the framework and developing a holistic shared mental model may be essential for learning and participant buy-in. Our goal, therefore, was to develop a workshop that would allow faculty groups to construct a shared mental model that was locally grown and sensitive to specialty and institution, yet based on general principles supported by the medical education literature.
This workshop adds to the existing body of work in one other way. Other scholars have identified variability in feedback recipients (four resident challenges) and suggested adjusting feedback provider approaches accordingly. Since our workshop is based on scenarios that were selected, scripted, and enacted by feedback recipients (residents), it allowed us to also explore variability in feedback provider (faculty) behaviors. We believe that using more than one perspective in developing a shared mental model for feedback allowed us to highlight multiple facets of the feedback construct and understand it more fully.
Most importantly, in addition to promoting a deeper understanding of the development of the knowledge, attitudes, skills, and habits necessary for providing meaningful and impactful feedback, improving faculty feedback-giving skills may allow us to take another step towards improving physicians' performance and patient care.

Local Feedback Context
After surveying residents in our large (N = 36) PM&R program regarding feedback they had received in 2015, we implemented a feedback bundle that included resident self-assessment, a faculty assessment, and a joint conversation. This process was facilitated by using an iPad app called PRIMES (locally developed based on the RIME [Reporter, Interpreter, Manager, Educator] framework, with the addition of Professionalism and procedural Skills). Using their iPad, the residents first self-assessed (on their own) across the PRIMES dimensions and created three learning goals. They then met with, and passed the device to, the faculty, who assessed the resident blindly, using the same framework. After faculty submission of their assessment, the app compared resident self-assessment and faculty assessment, resulting in visual highlighting of areas of agreement and disagreement. This served as a starting point for a feedback conversation. Each resident was required to engage in this process at least once a month and was encouraged to do so mid-rotation. This was reinforced during semiannual meetings with the program director and via emails from chief residents.
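For readers interested in the mechanics of the comparison step, the app's agreement/disagreement highlighting can be sketched as follows. This is a minimal illustration only, not the actual PRIMES implementation; the dimension names and the numeric rating scale are assumptions for the sake of the example.

```python
# Hypothetical sketch of the PRIMES comparison step: given a resident
# self-assessment and a blinded faculty assessment on the same dimensions,
# flag each dimension as agreement or disagreement. Dimension names and the
# rating scale are illustrative assumptions.

PRIMES_DIMENSIONS = [
    "Professionalism", "Reporter", "Interpreter",
    "Manager", "Educator", "Skills",
]

def compare_assessments(self_ratings, faculty_ratings):
    """Return {dimension: 'agree' | 'disagree'} for the two assessments."""
    return {
        dim: "agree" if self_ratings[dim] == faculty_ratings[dim] else "disagree"
        for dim in PRIMES_DIMENSIONS
    }

resident = {"Professionalism": 4, "Reporter": 4, "Interpreter": 3,
            "Manager": 3, "Educator": 2, "Skills": 4}
faculty = {"Professionalism": 4, "Reporter": 3, "Interpreter": 3,
           "Manager": 2, "Educator": 2, "Skills": 4}

highlights = compare_assessments(resident, faculty)
# Dimensions flagged 'disagree' (here, Reporter and Manager) become the
# starting points for the feedback conversation.
```

In the deployed app, these flags drove the visual highlighting that opened the feedback conversation.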
During 14 monthly rotations, 48 residents and 16 faculty members completed 343 PRIMES encounters. Each faculty member participated in a median of four encounters. Average resident compliance with the once-a-month requirement during the same time period was 71%, with most feedback encounters clustering during the last week of the rotation.

Development of Feedback Vignettes
A resident developed six vignette scripts based on some of the issues identified in our qualitative study: lack of faculty engagement or time (Distracted Attending, Impersonal Attending), challenging resident behaviors (Cocky Connor, Defensive Debbie, Self-Effacing Sammy), and a portrayal of a constructive conversation from the resident perspective (Appendix B). The scripts were reviewed by one of us (Alex Moroz), who made minor edits. We subsequently recruited two residents (Baruch Kim, Anna King) and a faculty member (Heidi Fusco), who enacted the six vignettes. These were recorded and produced (Appendices C-H).

Workshop Implementation
We conducted this 1-hour workshop (in small groups) with faculty in the departments of PM&R and neurology who were feedback providers for our residents (using the PRIMES feedback bundle described above). For convenience, faculty at each clinical site (e.g., inpatient brain injury) or resident rotation (e.g., electrodiagnosis) formed distinct groups. All workshops were conducted by one of us (Alex Moroz), who took handwritten notes documenting the faculty discussion.

Development of a Shared Mental Model
To accomplish the goal of developing a shared mental model of useful feedback conversations, each group of faculty viewed the video vignettes and afterward held a discussion focused on faculty behavior in the vignettes, aiming to answer two questions: "What did faculty do well?" and "What could faculty do better?"

Qualitative Analysis and Literature Review
Group responses were recorded as positive statements to distill strong feedback practices (e.g., "She did not maintain eye contact" became "maintaining eye contact") and scanned into qualitative analysis software (ATLAS.ti V.1.5.3; Scientific Software Development GmbH, Berlin, Germany). Using segments of text identified in the notes, we defined coding categories and created a coding scheme that identified individual codes and their relationships. We reached consensus by discussing this scheme and the individual coding categories.
In order to ensure both sensitivity to specialty and institution context and conformance to general principles supported by medical education theory, we conducted a focused literature review and used the results to group our items (codes) into broader dimensions of feedback.

Session Evaluation
Faculty participants received an online session-evaluation survey (three items, developed by the authors) focusing on the first two levels of Kirkpatrick's hierarchy, as well as suggestions for improving the workshop (Appendix I). After analyzing responses, the session evaluation was revised to include five questions and better align with educational objectives.

Materials/Logistics/Setup Needed for Implementation
In order to implement this workshop locally, the following materials and local resources are necessary:
• Buy-in and support from division head or department chair.
• Faculty small-group facilitator and faculty champion (can be the same person).
• Support staff to schedule workshop and remind faculty.
• Sufficient space to accommodate small-group discussions.
• Working audiovisual equipment capable of playing video on a sufficiently large screen.

Results
Twenty-three faculty members participated in seven small-group workshops. Average group size was 3.3 (range: two to five participants), and there were 16 male and seven female faculty participants. Of the seven workshops, six were with PM&R faculty groups, and one was with a neurology faculty group.
Initial qualitative analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories (Table). Each cell represents the number of times faculty (all groups) mentioned a particular coding category during discussion of the case in question. After incorporating the results of a focused literature review, we identified 48 items that we grouped into 10 dimensions of feedback, supported by both our data and published literature (Figure; see Appendix J for the supporting literature references). Faculty groups identified 11 out of 32 items discussed in the literature (indicated by italics in the Figure). The theme of specificity of feedback emerged as a prominent finding when we explored how the coding categories distributed across the feedback vignette scenarios (Table). Faculty agreed that the cocky resident needed confronting, the defensive resident needed specific examples, and the self-effacing resident needed exploration and support.
Four participants completed the online session-evaluation questionnaire (Appendix I). All reported that they liked (two) or liked a lot (two) the workshop's format and thought they were better at providing feedback to residents as a result of the workshop (four). Recommendations for improvement included providing more videos on good feedback technique, including an optimal feedback session, "keeping it up," and removing the titles of the videos as they might bias the viewers.

Discussion
In this faculty development workshop, small groups of faculty viewed feedback video vignettes together and arrived at a shared mental model of feedback. We used qualitative analysis for faculty narratives combined with the findings from a focused literature review to define dimensions of feedback and shared these with the participating faculty.
In a short time (1 hour), small faculty groups were able to develop a shared mental model of dimensions of meaningful and impactful feedback that was also grounded in medical education literature.
While individual group discussions varied, the characteristics of feedback that each group thought to be most valuable were similar across all groups.
Faculty groups identified 11 out of 32 items discussed in the published literature (Appendix J). The largest number of missed items clustered around the dimension of Starting With Self-Assessment, perhaps because the structured feedback encounter automatically included resident self-assessment and was taken for granted as part of the process by the participating faculty. Another explanation is that physicians, like members of other professions, do not accurately self-assess or include this process in their daily practice or cultural norms. Two dimensions (Preparation, Engagement, Investment; Confidence, Direction, Correction) received more attention from the participating faculty than from the previously published literature. We hypothesize that this may have been a result of the specific scenarios selected for the video vignettes. However, considering that these vignettes were created by residents and deemed representative of relevant feedback interactions, it is likely that these dimensions will be generalizable to other settings.
In fact, the theme of specificity of feedback not only appeared as one of the 10 dimensions (Individualizing the Conversation), it was also a prominent finding when we explored how the coding categories distributed across the feedback vignette scenarios (Table). Faculty agreed that the cocky resident needed confronting, the defensive resident needed specific examples, and the self-effacing resident needed exploration and support. This closely echoes the findings of Roze des Ordons, Cheng, Gaudet, Downar, and Lockyer, who explored the challenges that faculty experienced and the approaches taken in adapting feedback conversations to different residents. While the faculty questioned their ability, they were able to adapt their approach to feedback, drawing on techniques of coaching for highly performing residents, directing for residents demonstrating insight gaps, mentoring and support for emotionally distressed residents, and mediation for overly confident residents.
Our work has several important limitations. As the workshop was conducted within a single institution and specialty, the items and dimensions identified may not be directly generalizable to other settings. We think that incorporating the findings of a focused literature review may have mitigated this somewhat; in addition, feedback is a universal skill necessary across all specialties, so other settings would be expected to encounter similar challenges. We also decided not to change or remove the vignette titles, despite their potential to bias viewers, because these represented our residents' perceptions of the process. Finally, our work was limited to the graduate medical education setting, and we do not know whether our findings are generalizable to the contexts of undergraduate or postgraduate medical education or broader health professions education.
There were also challenges we encountered in our program evaluation approach. The response rate (four out of 23) was rather low, and in retrospect, we should have followed the initial survey invitation with several reminders (lesson learned). Additionally, our session-evaluation questionnaire focused on lower levels of Kirkpatrick's hierarchy, rather than on changes in faculty behaviors or, better yet, resident behavior changes or patient outcomes. While we acknowledge the rising difficulty of measuring outcomes with each step within the hierarchy, we also recognize the matching increase in the meaningfulness of the findings.
We believe that while these findings have deepened our theoretical understanding of the dimensions of feedback, defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can increase reliability of scoring for feedback assessments and may be the next logical step in our work. Although rubrics may not ensure validity of judgment of feedback ratings per se, they can potentially promote learning and make teaching feedback providers easier by clarifying both the criteria and the expectations, thereby facilitating feedback and self-assessment.
We offer the following recommendations to readers who might consider replicating this workshop. We do not think that a formal analysis of the participants' discussions, as was done here, is necessary after each workshop; on the other hand, explicit and frank discussion and a local consensus-building process are paramount. By limiting the size of small groups to three to seven participants, the workshop can be scaled up to any number of faculty by varying the number of groups. We do not think that increasing group size would be as effective, because it would be difficult to ensure engagement of all group members in larger groups. Finally, we found our workshop evaluation response rate to be very low with a single administration of the survey; we suggest at least two strategically timed reminders after the initial request for feedback.