Team Based Learning: Preterm Labor (PTL), Preterm Premature Rupture of Membranes (PPROM) and Medical Complications of Pregnancy
MedEdPORTAL Publication ID 9668 | January 19, 2014 | Version 1
This team-based learning module instructs obstetrics and gynecology clerkship students in the diagnosis and initial management of pregnancy complications. It is ideally suited to an active learning curriculum in which students are accustomed to preparing ahead of time and working through questions and cases in the classroom. The module includes a Readiness Assessment Test to be taken individually and as a group, an Application Test to be taken as a group, and the various forms we use to deploy the module logistically. The advantages of this module are: 1) less faculty preparation time is required, 2) students are actively engaged in applying the material, and 3) no traditional lecturing is required. Students are assessed using multiple-choice questions, and their group interactions are assessed by their peers. The module can be scaled to any group of 10 or more students, since students form groups of 5-7 for the exercise.
At the completion of this module, students will be able to identify the risk factors, consequences, assessment, diagnosis and treatments for PPROM. Students should also be able to identify risk factors and effective interventions for PTL. Students should be familiar with surgical complications unique to pregnant patients, and be able to assess, diagnose and treat pregnant patients with gestational diabetes, gestational hypertension, preeclampsia, eclampsia and HELLP syndrome. Finally, students should have a working knowledge of maternal infections that can be transmitted to the neonate, including diagnostic assessments, transmission risk, prevention and prophylaxis strategies, and possible complications associated with transmission.
Chuang A, Wang N. Team Based Learning: Preterm Labor (PTL), Preterm Premature Rupture of Membranes (PPROM) and Medical Complications of Pregnancy. MedEdPORTAL Publications; 2014. Available from: https://www.mededportal.org/publication/9668. DOI: http://dx.doi.org/10.15766/mep_2374-8265.9668
- To define the following conditions: premature rupture of membranes, preterm premature rupture of membranes, preterm labor, gestational diabetes, gestational hypertension, preeclampsia, eclampsia and HELLP syndrome.
- To describe how to assess, diagnose and clinically treat each of the conditions listed above.
- To describe surgical complications, limitations and characteristics unique to pregnancy.
- To describe the different infections pregnant women are screened for.
- To have a working knowledge of the transmission risks of the infections.
- To select appropriate prophylaxis and prevention interventions and treatments for each of the infections as applicable.
- To describe the risks that each of the infections poses to the neonate.
- To apply this to clinical scenarios.
- Team-Based Learning, TBL, Preterm Labor, Obstetric Labor, Premature, Preterm Premature Rupture of Membranes, Fetal Membranes, Premature Rupture, Medical Complications of Pregnancy, Pre-Eclampsia, Diabetes, Gestational
Obstetrics & Gynecology
Maternal & Fetal Medicine
Interpersonal & Communication Skills
Knowledge for Practice
Evidence Based Practice
- Communication Skills
Team-based Learning (TBL)
- Clinical Skills/Doctoring
- Medical Student
Authors & Co-Authors
Alice Chuang, MD
University of North Carolina at Chapel Hill School of Medicine
Effectiveness and Significance
We feel TBL enables students to be more active learners during the ob/gyn clerkship than the PowerPoint-driven didactics we used previously. We have worked hard during our 6-week clerkship to create a series of active learning sessions that “flip the classroom.” We hope this method delivers material effectively and frees the precious time we spend with students to work on clinical reasoning skills and the nuances of clinical care. Our sessions are modeled after the process described by Parmelee et al (2001).
From a faculty and resources standpoint, implementation works well. Only one facilitator is needed for the whole session (usually myself, another generalist, or at times a highly skilled administrative assistant), plus a content expert, who does not need to prepare any materials. Overall this has been a rewarding experience for students, the facilitator and the content expert.
Special Implementation Guidelines or Requirements
The major limitation is time, which dictates not only how many readiness assessment questions and application questions can be included in the module but also how many modules can be included in the series during one clerkship. Additionally, the scope of the readiness assessment and application questions can be improved upon and expanded, either to include more subtleties or to involve more components in the answers.
Our hope is that by publishing our modules and the accompanying results, we will be able to garner more feedback from other institutions and experiences in order to better improve and expand upon the topics, the questions and the application of the TBL.
Additionally, students are sometimes uncomfortable with the peer feedback portion of the exercise, to the point of not observing the guidelines and assigning 10s to all their peers. Over time, we have observed that students become more familiar and comfortable with it as peer feedback has become more common across our school's curriculum.
To date, 247 students comprising 44 teams have completed the TBL module, with an average of 68.1% (range 58-81%) on the iRAT, 94.9% (range 83-100%) on the gRAT, and 62.3% (range 20-100%) on the Application Test (App). The gRAT items missed most frequently are graphed below:
[Figure: number of times each gRAT question was missed by teams]
Appeals have been entertained for questions 6, 13, 14 and 16, and approved for each, usually resulting in small changes in wording. Student-initiated questions and discussion frequently center around questions 5, 6, 11, 12, 13 and 16. Though question 13 is missed frequently, I do not believe it is a bad question; rather, it covers important information with which the students are not familiar.
On the App, students rarely miss #5 and rarely get #4 correct, but usually hotly debate questions 1-3, with an almost even distribution of answers. For question 1, teams argue between choices c and d; for questions 2 and 3, between a and b. For question 4, they often fail to discover that maternal second-trimester anemia is not a proven cause of intrauterine growth restriction (IUGR) and choose b and c, rationalizing that methimazole is not linked to IUGR and that checking the HIV viral load every 4 weeks does not change outcomes. They fail to discover in their research that hyperthyroidism is linked with IUGR, and often forget that the viral load can guide medication adjustment for maximum disease control in pregnancy.
One of the first pitfalls we encountered in deploying the module at our institution was how to create the two separate groups. Initially, we separated the groups by last name, but soon realized that this clumped together certain Asian students who shared the same last name, a trend pointed out by the students involved. Next, we let the students create their own two groups, but observed that this sometimes led to unequal participation, as friends would clump together and dominate the group conversation. Finally, we settled on having students line up by the distance of their birthplace from our hospital and then number off.
The second obstacle we encountered arose when students were asked to vote on how each of the three scores (iRAT, gRAT and peer assessment) would contribute to the final individual score. Many student groups wanted to weight the gRAT as heavily as possible compared to the iRAT and peer assessment. We suspect this trend was largely driven by the fact that group scores tend to be higher than individual scores, as students benefit from the group’s aggregate knowledge and certainty. Additionally, if the gRAT is the main component of the individual score, students are all more likely to have a higher individual score, which “looks” better for the students even though it does not change the competition for the top two individual scores. In response to this trend, we have adopted a rule that the iRAT must make up at least 15% of the final individual score.
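For readers automating the grading, the weighting rule can be sketched as follows. This is our own illustration, not part of the module materials; the function name and signature are hypothetical.

```python
def final_score(irat, grat, peer, w_irat, w_grat, w_peer):
    """Combine the three component scores into a final individual score.

    The weights come from the class vote; per our rule, the iRAT weight
    must be at least 15%, and the three weights must sum to 1.
    """
    if abs(w_irat + w_grat + w_peer - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    if w_irat < 0.15:
        raise ValueError("iRAT must contribute at least 15% of the score")
    return w_irat * irat + w_grat * grat + w_peer * peer

# Example: a gRAT-heavy vote that still respects the 15% iRAT floor.
score = final_score(68.1, 94.9, 100.0, 0.15, 0.70, 0.15)
```

A vote with `w_irat` below 0.15 would be rejected, enforcing the rule described above.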
One interesting aspect of the TBL scores and outcomes has been the way students handle assigning their peer assessment scores. The directions for the assessment state explicitly that 10 points are allotted for each team member, excluding the student completing the evaluation, but that students are not allowed to assign 10 points to every teammate. This means the evaluator must assign at least one pair of differing values (e.g., one team member 11, another 9 and the rest 10, or some other combination) that reflects at least one member’s exemplary contribution and another’s less-than-exemplary contribution.
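A minimal sketch of that allocation rule (again our own illustration; the module's actual forms are paper-based):

```python
def valid_peer_allocation(scores):
    """Check one evaluator's peer assessment sheet.

    `scores` lists the points given to each teammate (the evaluator
    excluded). A sheet is valid when it spends exactly 10 points per
    teammate in total and does not simply give everyone 10.
    """
    return sum(scores) == 10 * len(scores) and any(s != 10 for s in scores)
```

For five teammates, `[11, 9, 10, 10, 10]` is valid while `[10, 10, 10, 10, 10]` is not; note that unusual but rule-compliant splits such as `[25, 25, 0, 0, 0]` also pass this check.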
In a few cases, students ignored the directions and assigned 10 points to each of their team members, effectively nullifying the peer assessment score as a factor in who the two “winners” are. Another common pattern was groups in which students all agreed to give the peer sitting to their right 11 points, the peer sitting to their left 9 points, and all others 10 points, once again effectively nullifying the peer assessment score as a factor in who ultimately “wins.” This occurred despite observable differences in contribution and preparedness within the groups. Both of these patterns reflect the strong and deep aversion students have toward publicly criticizing and grading their peers, especially in the context of medical school, where their classmates will remain their peers for a substantial number of years and may even play some role in their careers later in life.
Most interesting, however, was one group that used the peer assessment scoring strategically to ensure that the two top individual winners would be members of their own team. They did this by identifying the two members with the highest iRAT scores, splitting the team’s peer assessment points equally between those two, and giving everyone else 0 points. This strategy allowed the team members to cleverly sidestep the uncomfortable task of critically evaluating their peers while still following the directions completely. Additionally, the strategy, which worked, fostered team spirit and team bonding. We have yet to see this done more than once, so we have not addressed it by changing any of the rules or directions.
This information is made available under a Creative Commons license.