Original Publication
Open Access

Qualitative Coding Boot Camp: An Intensive Training and Overview for Clinicians, Educators, and Administrators

Published: October 26, 2018 | 10.15766/mep_2374-8265.10769

Appendices

  • Qualitative Coding Boot Camp.pptx
  • Facilitators' Guide.docx
  • Pre- and Postevaluation.docx

All appendices are peer reviewed as integral parts of the Original Publication.

To view all publication components, extract (i.e., unzip) them from the downloaded .zip file.


Abstract

Introduction: Qualitative coding is a tool for analyzing textual data, such as interview or focus group transcripts. While many schools and universities have staff who can assist faculty with quantitative data analysis, qualitative data analysis is interpretive and requires both content-specific knowledge and research methodology tools. In this qualitative coding boot camp, we provide clinician-educators, staff, and administrators with a general overview of qualitative coding and analysis. Methods: We designed and implemented an in-person training to give researchers with limited exposure to qualitative research a general orientation to it. We provided an overview of qualitative data collection and coding, and we developed focused research questions tied to sample interviews that participants used to work together on a codebook. We concluded by discussing the iterative process of coding, how to work from codes to themes for a manuscript, and how to present and disseminate results. Results: To examine participants’ learning during the boot camp session, we used a series of nonparametric sign tests to compare pre- and postsession responses on our evaluation form. The results of these tests showed significant growth in participant comfort with undertaking qualitative analysis. Discussion: Qualitative coding is an important skill for clinicians and their research teams, as it can help them understand the experiences of those around them through an empirical lens. With this 2-hour training, we were able to increase participants’ comfort with the set of skills required to analyze qualitative data rigorously.


Educational Objectives

By the end of this activity, learners will be able to:

  1. Identify and list the different approaches to qualitative data collection.
  2. Identify codes and create a codebook based on their research questions.
  3. Synthesize codes into overarching themes.
  4. Move from data to a written results section.

Introduction

Qualitative research methods are used regularly within studies of medical education and medical interventions to understand the experience and perspectives of individuals.1 However, training in analyzing qualitative data, such as interview or focus group transcripts, is not as pervasive as training for quantitative data, and there are often fewer institutional resources in place to provide support for analyzing qualitative data.

The presenters were two PhD researchers who work with many clinician-educators who have limited experience conducting research. Together, we identified the need for a general primer introducing qualitative data coding to clinician-educators. Specifically, through individual conversations with clinician-educators, as well as through a needs assessment conducted by coauthor Lindsay B. Demers, it became apparent that the process of coding was particularly daunting to novice qualitative researchers. In response, we designed this training as an intensive 2-hour session focused on the theory and skills behind qualitative coding, with time for participants to code inductively and begin to develop a qualitative codebook.

It is worth noting that at the institution where this training was developed and run, there are many emerging resources to support clinician-educators’ research. These resources range from informal, one-on-one consultations with our institution’s Education Evaluation Core to formal health sciences education courses on evaluation, assessment, and research methods. While our training provides limited exposure to qualitative coding and familiarization with its terms, it is not sufficient to move someone from novice to expert qualitative researcher. Moreover, we do not expect researchers to feel comfortable independently designing and conducting a qualitative study at the training’s conclusion. Rather, our goal is simply to increase comfort with basic terms and some of the key components of qualitative coding. For this reason, we believe the course works best for researchers who want a refresher on qualitative coding or who are trying to understand more about the kinds of data that qualitative approaches yield. Furthermore, we acknowledge that qualitative coding is just one component of the broader process of qualitative data collection and analysis, but it is an important part of that process that is time intensive and conducive to a 2-hour boot camp.

Existing materials in MedEdPORTAL offer some information about coding or the uses of qualitative research, but our training provides practical experience with, and technological training on, current tools for qualitative coding. Previous MedEdPORTAL resources2,3 have described mixed-methods research design workshops, discussing the theory behind what sorts of questions quantitative and qualitative research can answer and how mixed methods can strengthen research projects. Both of those resources focus on the process of determining one’s research method, not on data analysis. Harris4 has provided a general overview of analyzing qualitative data, describing the general uses of qualitative data and how to code for themes. Our training adds to the content presented in Harris’s workshop, providing an in-depth coding activity through which participants experience inductive coding and codebook development, as well as a general overview of the computer software available to assist with qualitative coding.

Methods

The training was run through the Department of Medicine’s Education Evaluation Core, directed by coauthor Lindsay B. Demers. The Education Evaluation Core runs a variety of on-campus trainings for faculty, trainees, and administrators concerning research methods and data analysis. We advertised the boot camp to medical educators, research staff, and other administrators who had conducted or expected to conduct qualitative research and wanted a general introduction to qualitative analysis and coding techniques. The training was offered once in the morning and once in the afternoon to increase the likelihood that interested parties could fit it in around their clinic schedules. Approximately 6 weeks before the training, a calendar invitation was sent to all the education research teams in the department through Microsoft Outlook, and attendees signed up simply by accepting the invitation. The training was held in a conference room in a main instructional building that was familiar to all attendees.

As noted above, we offered the training to all clinician- and scientist-educators as well as their research support staff and administrators. We did not limit this training to faculty alone because we recognized that research and administration staff often undertook analysis of qualitative data, be it for quality improvement activities or for scholarly research. We asked that interested participants sign up in advance so that we could limit the number of attendees to 12-15 per session. Although the session could be conducted with a larger group if necessary, we wanted to engage in discussion with all groups and to answer any particular questions they had about their projects.

Attendees were not required to do any reading in advance, nor were they required to have any specific training or baseline knowledge. Ideally, attendees should have had some exposure to the theory behind qualitative research, given that this presentation focused primarily on the coding aspect of qualitative data analysis. Because we expected that not everyone would have complete knowledge of qualitative research, we included a brief overview of the underlying theory. When implementing this resource at other institutions, facilitators should be experienced with qualitative analysis so that they can effectively answer questions that go beyond the scope of what is presented in the PowerPoint slide deck (Appendix A).

The session was approximately 2 hours long. The time spent on each activity is provided in the attached facilitators’ guide (Appendix B), and the discussion points highlighted during each slide are provided in the slide deck. Tables in the conference room were set up in a large square, open at the back, so that participants could all see the presentation and talk with their groupmates across the table. As presenters, we began and ended the session by administering an evaluation (Appendix C). When participants broke into small groups, we distributed three different research questions to orient their reading of the example interviews. Participants independently read the interviews and started developing potential inductive codes, then worked together in their small groups to begin developing the codebook. After participants had coded the texts individually and discussed them in their small groups, we reconvened as a full group and led a discussion on how the process had gone and what participants had learned from the experience. We went on to describe strategies for continuing analysis and framing results for a manuscript. We discussed the benefit of multiple coders for increasing validity and reliability. We also raised issues related to presenting qualitative data (e.g., quotes from interview transcripts) in medical journals with low word limits. We ended the discussion by introducing some of the benefits of qualitative data-analysis software, giving a brief preview of its utility for organizing files.
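For facilitators who want a concrete picture of what the small groups are working toward, the sketch below shows one hypothetical way to record codebook entries in structured form. The field names and codes are illustrative assumptions (loosely themed to the sample data set described below), not the codebook our participants produced.

```python
# A hypothetical codebook structure (illustrative only, not the one our
# participants produced). Each entry pairs a code with the decision rules
# a team agrees on while reconciling their independent, inductive codes.
codebook = [
    {
        "code": "economic_pressure",
        "definition": "Participant describes financial strain on the town "
                      "or on individual livelihoods.",
        "include_when": "Mentions of income, jobs, or cost of living.",
        "exclude_when": "General complaints with no economic referent.",
        "example_quote": "(paste a representative transcript excerpt here)",
    },
    {
        "code": "environmental_change",
        "definition": "Participant describes changes to local ecology or "
                      "natural resources over time.",
        "include_when": "Observations about fish stocks, water quality, or land use.",
        "exclude_when": "Historical anecdotes that do not note a change.",
        "example_quote": "(paste a representative transcript excerpt here)",
    },
]

# During group discussion, entries are added, merged, and reworded until
# the team can apply the codes consistently to new transcript excerpts.
```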

Example Data Set
We designed the training to be adaptable to any qualitative data set the facilitators can access. With the chosen data set in mind, facilitators must develop qualitative research questions that can be answered with the data. We opted to break our roughly 15 participants into small groups of four to five, so we identified three research questions, one for each group. Within the example data set, facilitators need to identify two to three interview transcripts that contain information relevant to the research questions. We found that two interviews of approximately six pages each were at the upper limit of what people felt comfortable reading and analyzing in the 40 minutes dedicated to individual and small-group work.

In our implementation of this workshop, we used the example data set that comes with the NVivo software (Version 11, 2015, QSR International) and focuses on the economy and ecology of a small fishing town in North Carolina. We intentionally chose a topic outside of medicine so that clinicians, scientists, researchers, and administrators would all come to the table with relatively similar levels of knowledge. For a less diverse group of participants, a topic closely related to their areas of specialty would likely be equally effective. If training facilitators do not have access to publicly available qualitative interviews, most qualitative data-analysis software programs include sample texts for use. Facilitators could also consider using other publicly available transcripts (e.g., congressional hearings or news interviews).

Materials
To successfully implement this workshop, facilitators must have the following materials:

  • Hard copies of the slide deck to hand out to participants.
  • A computer that has the PowerPoint slides and NVivo.
  • A projector and screen on which to project the slides and the NVivo tutorial.
  • Hard copies of the interviews and research questions for participants.
  • Optional but suggested: colorful markers or pens for participants to use while coding the interviews.
  • A published qualitative study to give participants so they have an example of what a rigorous write-up should look like. We used one by Childs, Laws, Drainoni, et al.5

Evaluation
The pre- and postevaluation forms we used were identical. The items used a 5-point response scale to assess comfort with a range of qualitative analysis tasks aligned with our learning objectives. To link pre- and postsession responses anonymously, we asked respondents to create a unique identifier based on their mother’s maiden name, their birth month, and the street where they were raised. To protect participants’ anonymity, we did not ask them to identify their role in the department on the evaluation form. However, if a larger group is involved, facilitators may wish to ask this question of attendees.
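As an illustration of how such linking might be done, the sketch below assumes the pre- and postsession responses have been entered into two CSV files with the self-generated identifier in a column named uid; the file and column names are our assumptions for the example, not part of the original materials.

```python
# A minimal sketch of linking pre- and postsession evaluation records on the
# respondent-created identifier. File and column names are hypothetical.
import pandas as pd

pre = pd.read_csv("pre_evaluation.csv")
post = pd.read_csv("post_evaluation.csv")

# Normalize the self-generated identifier (maiden name + birth month +
# street) so trivial typing differences do not break the link.
for df in (pre, post):
    df["uid"] = df["uid"].str.strip().str.lower()

linked = pre.merge(post, on="uid", suffixes=("_pre", "_post"))
print(f"Linked {len(linked)} of {len(pre)} presession respondents.")
```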

Although the evaluation form did not contain any formative questions regarding participant satisfaction with the workshop and/or ideas for improvements, we solicited this feedback informally from attendees after their participation. Based on the feedback we received, the only change we made after the first implementation was to provide a hard copy of the slides in addition to the other handouts we distributed.

Results

To ensure anonymity of respondents when gathering evaluation data, we did not ask them to report their role within the department. However, each session was approximately 75% faculty and 25% administrators/research support staff. In total, 17 learners participated in the training across two sessions.

As described above, our evaluation of the training’s effectiveness consisted of a pre- and postevaluation form. In both questionnaires, respondents were asked to indicate their comfort with a variety of qualitative research tasks on a 5-point scale (1 = Very Uncomfortable, 5 = Very Comfortable). In the postsurvey, respondents were also asked to list two things they had learned from attending the training. Pre- and postevaluation data were linked using an anonymous, unique identifier created by respondents.

To assess pre- to postsession change, we used a series of nonparametric sign tests. For ease of interpretation, we present pre- and postsession means alongside the corresponding sign-test results for each qualitative research task about which we surveyed participants (Table). Although we saw statistically significant gains in each area of interest, there were two instances in which scores decreased from pre- to postsession. We hypothesize that this was a result of people assuming a higher level of comfort prior to learning the complexities of qualitative coding.
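For readers who want to reproduce the analysis, the following sketch runs an exact sign test (a binomial test on the positive vs. negative pre/post changes, with tied pairs excluded) using the counts reported in the Table. Python and SciPy are our choice for illustration, not necessarily the software used in the original analysis.

```python
# Exact sign test for each surveyed task, using the (positive, negative)
# change counts from the Table; tied pairs are excluded by the test.
from scipy.stats import binomtest

tasks = {
    "Developing initial codes": (11, 0),
    "Refining and organizing codes": (12, 0),
    "Creating a codebook": (14, 0),
    "Independently coding qualitative data": (13, 0),
    "Reviewing independently coded data for consistency": (11, 0),
    "Analytically moving from codes to themes": (12, 1),
    "Writing up qualitative results for publication": (11, 1),
}

for task, (positive, negative) in tasks.items():
    n = positive + negative  # untied pairs only
    result = binomtest(positive, n, p=0.5, alternative="two-sided")
    print(f"{task}: p = {result.pvalue:.3g}")
```

Running this sketch reproduces the p values in the Table (e.g., 11 increases and 0 decreases across 11 untied pairs yields p ≈ .001).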

Table. Results of Analysis of Pre- and Postevaluation Forms (n = 17)

Task                                               | Pre M (SD)  | Post M (SD) | Negative | Tie | Positive | p
Developing initial codes                           | 2.24 (1.39) | 3.59 (0.94) |    0     |  6  |    11    | .001
Refining and organizing codes                      | 2.24 (1.35) | 3.65 (1.00) |    0     |  5  |    12    | <.001
Creating a codebook                                | 1.88 (1.11) | 3.24 (0.97) |    0     |  3  |    14    | <.001
Independently coding qualitative data              | 2.12 (1.36) | 3.59 (1.18) |    0     |  4  |    13    | <.001
Reviewing independently coded data for consistency | 2.00 (1.17) | 3.00 (1.12) |    0     |  6  |    11    | .001
Analytically moving from codes to themes           | 1.88 (1.17) | 3.12 (1.17) |    1     |  4  |    12    | .003
Writing up qualitative results for publication     | 2.29 (1.45) | 3.06 (1.20) |    1     |  5  |    11    | .006

Note: Negative, Tie, and Positive give the number of respondents whose comfort rating decreased, stayed the same, or increased from pre- to postsession; p values are from nonparametric sign tests.

Discussion

Qualitative research is an important methodological tool for medical professionals. Regular, rigorous training on qualitative data analysis can assist clinicians in analyzing and disseminating their research. Effective qualitative coding requires training in best practices regarding how to approach the coding process, tips on how to evolve codes into broader themes, and suggestions for how to organize themes in manuscripts.

Based on our evaluation data, which came from items closely aligned with the training’s learning objectives, we are confident that the workshop achieves its goals. While we do not expect participants to be experts in qualitative research by the end of the program, we believe that exposure to fundamental concepts and the experience of preliminarily coding qualitative data help participants gain comfort with the concepts and skills involved. With regard to generalizability, because we implemented the workshop across a wide range of medical education personnel (from clinicians to administrators), the results of our evaluation are likely to be replicated with a diverse group of learners, especially as the training requires no prerequisite research knowledge. The biggest challenge we encountered in running the boot camp concerned scheduling. Clinicians and their staff have especially busy schedules. Taking this into account, we sent out invitations approximately 2 months in advance to ensure that as many interested learners as possible could attend. We also recommend running the session a few times and on different days to better accommodate a diverse group of medical education personnel.

The limitations of our training program stem from its boot-camp format. Some participants recommended that we expand the boot camp into a single 3-hour session or into two 2-hour sessions. While our evaluations indicate that individuals gained comfort with the qualitative coding process, we acknowledge that additional training beyond this workshop is necessary for a novice researcher to rigorously design and conduct a qualitative research project. In addition to regular workshops, our institution’s Education Evaluation Core offers ongoing support for researchers throughout the research process and can provide follow-up to the materials presented in this session. There is also a master’s program in health sciences education at our school through which researchers can receive in-depth exposure to qualitative methods. Because no 2-hour training can cover the entirety of qualitative research, we recommend that participants have at least some familiarity with qualitative research generally, with the goal of enriching their understanding of how qualitative coding works and what sorts of data to expect when undertaking a qualitative study.

In the future, we plan to continue developing and refining trainings and resources for clinician-educators to research and evaluate their ongoing work. Developing additional training related to other data-collection, analysis, and dissemination work is a continuing goal.


Author Information

  • Ellen Childs, PhD: Research Scientist, Health, Law, Policy and Management Department, Boston University School of Public Health
  • Lindsay B. Demers, PhD: Assistant Professor and Director, Education Evaluation Core, Department of Medicine, Boston University School of Medicine

Disclosures
None to report.

Funding/Support
Dr. Childs reports grants from the American Heart Association/National Institutes of Health outside the submitted work.

Ethical Approval
The Boston University Medical Campus Institutional Review Board approved this study.


References

  1. Ringsted C, Hodges B, Scherpbier A. “The research compass”: an introduction to research in medical education: AMEE Guide no. 56. Med Teach. 2011;33(9):695-709. https://doi.org/10.3109/0142159X.2011.595436
  2. Schifferdecker K. When quantitative or qualitative data are not enough: application of mixed methods research in medical education. MedEdPORTAL. 2008;4:1146. https://doi.org/10.15766/mep_2374-8265.1146
  3. Blanchard R, Scott J. Connecting mixed methods as an education research strategy. MedEdPORTAL. 2014;10:9670. https://doi.org/10.15766/mep_2374-8265.9670
  4. Harris I. Analyzing qualitative data. MedEdPORTAL. 2006;2:227. https://doi.org/10.15766/mep_2374-8265.227
  5. Childs E, Laws MA, Drainoni M-L, et al. Caring for young children with asthma: perspectives from urban community health centers. J Urban Health. 2017;94(6):824-834. https://doi.org/10.1007/s11524-017-0186-6


Citation

Childs E, Demers LB. Qualitative coding boot camp: an intensive training and overview for clinicians, educators, and administrators. MedEdPORTAL. 2018;14:10769. https://doi.org/10.15766/mep_2374-8265.10769

Received: March 30, 2018

Accepted: October 3, 2018