Peer Assessment of Medical Lecturing: A Faculty Development Instrument

Publication ID: 9359 | Published: March 11, 2013 | Volume: 9

Abstract

Peer observation of teaching has become an important element in the assessment of faculty members’ instructional skills and competence. Assessing teaching accurately requires reliable, validated instruments, standardized observation procedures, and trained peer observers. Without training, observers may provide biased feedback that lacks the detail and specific suggestions needed for teaching improvement.

In 2007, we took part in the development, research, and publication of a Peer Assessment of Medical Lecturing Instrument. The instrument consists of 11 criteria of effective lecturing and an overall lecture quality measure. We then formed an expert panel to determine a full complement of behavioral descriptors and performance standards for each of the instrument’s criteria.

Over two years, we observed and rated lectures and catalogued performance behaviors associated with each of the instrument’s criteria. This process led to the development of a Rater Training Facilitator’s Guide to accompany the Peer Assessment of Medical Lecturing Instrument. Together, the guide and instrument may be used for peer rater training, as they provide faculty with detailed behavioral descriptors and guidance on conducting peer assessment of medical lectures or large group discussions.

To study the effectiveness of Frame-of-Reference (FOR) training in teaching faculty how to conduct a standardized peer assessment of medical lectures, we recruited seven interdisciplinary clinician educators to participate in FOR training at Beth Israel Deaconess Medical Center. Prior to training, the participants observed and rated a pre-test videotaped lecture. During the training, we provided an overview of FOR training and a thorough, detailed review of the Peer Assessment of Medical Lecturing Instrument and the Rater Training Facilitator’s Guide.

After the training, we asked the participants to rate a post-test lecture using the instrument and the guide. To determine the effectiveness of the training, these post-test ratings were compared with ratings previously established by a panel of expert medical educators who had watched the same videotaped lectures. Preliminary results comparing the experts’ ratings and the participants’ ratings indicated closer agreement on the post-test lecture. Moreover, the participants found the Rater Training Facilitator’s Guide valuable for detailing the exact behaviors associated with each criterion’s performance levels, and asked to keep the guide for future reference.

Citation

Newman L, Brodsky D, Roberts D, Atkins K, Schwartzstein R. Peer assessment of medical lecturing: a faculty development instrument. MedEdPORTAL Publications. 2013;9:9359. http://dx.doi.org/10.15766/mep_2374-8265.9359

Educational Objectives

  1. To encourage peer observation of teaching among faculty members, resulting in specific behavioral feedback and valuable discussions on best teaching practices.
  2. To provide faculty observers with detailed guidance on how to conduct peer observation using a standardized, reliable approach.
  3. To provide a peer assessment of teaching instrument along with behavioral descriptors and rating standards derived by a panel of medical education experts to be used for peer observer training, formative and/or summative lecture assessment, or as a reference tool.

Keywords

  • Peer Observation, Peer Review, Peer Rater Training, Lecturing, Lectures, Large Group Discussion, Peer Evaluations


ISSN 2374-8265