Limitations
We used Kirkpatrick’s evaluation model to assess the reaction of the learners (i.e., Kirkpatrick’s first level) [20]. Although we recognize that this level of evidence limits our readers’ ability to extrapolate the strength of this curriculum, this was an intentional choice: with 17 days to develop both a curriculum and a methodology for evaluating it, our team chose to dedicate the majority of its effort to providing our students with a strong product to fill an educational gap. Producing a robust, higher-level evaluation tool within those constraints was simply not possible. We were also acutely aware of student well-being during the early stages of the pandemic; students were already overwhelmed by abrupt disruptions to their lives, and we felt that administering additional or longer evaluation tools would further detract from their well-being. This concern about survey fatigue is also why, at the request of the medical school administration, no open-ended questions were asked. Finally, it is important to acknowledge that 29% of learners did not opt to include their survey responses in the data analysis, and the resulting sample may therefore be biased.
Additionally, this approach to curriculum development arose during a period of educational upheaval, and its applicability during “normal” times may be limited. Because learning and many other engagements were paused at the outset of the pandemic, student authors were able to devote themselves almost exclusively to this project. Finally, the success of this project was contingent on the UMMS administration’s acceptance of the unconventional approach.
Concluding thoughts
With these limitations in mind, we feel that the most important contribution this piece makes to the literature is one of process: students can contribute to the development of their own curricula during extenuating circumstances, and a Kirkpatrick’s first-level evaluation of the resulting curriculum suggests that target learners view it favorably. In particular, the fact that almost all learners found the course relevant to all medical students suggests that the curriculum fills a need at the undergraduate medical education level. The relative lack of perceived relevance to residency is also to be expected, as the content was specifically tailored to the knowledge level of the average undergraduate medical student.
The successful implementation of this student-driven model for curriculum development illustrates that although the traditional, faculty-designed approach may be appropriate in some settings, exclusive reliance on that approach may in fact limit student learning opportunities. When students are incorporated into the curriculum development process, their learning is not restricted to the receipt of a finalized curriculum alone; rather, they are able to learn while actively developing the product. Faculty remain integral to the process, but in a role that positively disrupts the traditional teacher-learner hierarchy. Rather than serving as the exclusive providers of information, faculty in this model facilitate a framework that drives students to pursue the information themselves, allowing for synergistic learning opportunities between both parties. Because the resulting product is created by students, their peers will also benefit from a curriculum that is likely more student-focused than one derived from a faculty-only team.
In light of these potential benefits to both the student developers and their peer learners, other institutions that hope to adopt this approach should be encouraged to give students full autonomy from the project’s outset. We also recommend that peer institutions consider the use of non-traditional sources (such as social media and preprint articles) when appropriate. Incorporating such sources not only allows more mixed-media content to be included in the resulting course, but also empowers students to critically evaluate the legitimacy of those sources. This is an important learning outcome in and of itself, and it presents an additional opportunity for partnership between students and faculty as they assess the veracity of the content.
One key potential pitfall arises when there is a long-term goal of publishing the curriculum to make it available to other institutions. When we began to pursue publication, we were hindered by a myriad of copyright laws that applied once we attempted to share the content with those not covered by our own institution’s copyright privileges. Finally, if other institutions have more time to develop their student-generated curricula, we would encourage the development of a more robust evaluation tool.
While our results were limited to Kirkpatrick’s first level of evaluation, the positive reactions of those who took the course suggest an interest in future related learning [20]. We are working to deploy an online, open-access version of the content, and we will use the MAL framework to guide revisions to the pandemic curriculum. Moving forward, as our student authors and learners return to the clinical arena, their reflections will inform the curriculum updates, thereby completing the assessing and adjusting phases of the MAL cycle. We intend to employ higher-level evaluation tools (e.g., pre/post assessment of knowledge and skills, interviews and/or observations of students as they apply new knowledge to clinical practice, or eliciting patient feedback on student behaviors) at our own institution as we launch future planned iterations of this curriculum.