Discussion Board Rubrics
Balancing Content Mastery with Peer Engagement
The Analysis
The General Education Commons development team identified a need to improve the standard undergraduate discussion board rubric to make it more inclusive and to promote an authentic exchange of ideas among peers about course content.
The Process
A working group with representation from faculty and staff across Learning Science and Assessment came together to evaluate the standard undergraduate discussion rubric. Working in small groups, our first task was to survey a range of topics, including non-standard discussion board rubrics already in use within the university system and industry trends in how discussion board rubrics are applied.
I participated directly in the small-group research into industry trends, where we analyzed Google Trends data and drew on our professional networks. This research was shared back with the larger group alongside the other small groups' findings, and it was determined that a revised discussion board rubric should be tested.
To develop potential changes, we collaborated with the UX team to produce writing samples and survey questions for moderated and unmoderated sessions with students and faculty, as well as a dean focus group survey about their experience with the existing discussion rubric and potential changes.
I used generative AI tools with potential discussion board prompts to create “student” writing samples. This work also informed the eventual Articulation of Findings (AOF): the more specific a discussion prompt was, the easier it was for an AI tool to generate a sufficient response.
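As a rough illustration of that workflow (not the team's actual tooling), the sketch below sends prompts of differing specificity to a generative AI chat API to produce sample “student” responses. The model name, prompt wording, and persona instruction are all assumptions for the sake of the example.

```python
# Minimal sketch: generating sample "student" discussion posts from prompts of
# varying specificity. Model name, prompts, and persona wording are illustrative
# assumptions, not the project's actual materials.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompts = {
    "broad": "Discuss a current event related to this week's reading.",
    "specific": (
        "Using two concepts from Chapter 3, analyze how a recent market downturn "
        "affected small-business lending, and respond to one peer's claim."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would work
        messages=[
            {
                "role": "system",
                "content": "Write as an undergraduate student posting to a course discussion board.",
            },
            {"role": "user", "content": prompt},
        ],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Comparing the two outputs side by side is one way to see why more specific prompts tended to yield more "sufficient" responses.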
I attended faculty and student UX sessions to troubleshoot technical issues and take notes. While UX was compiling results, I provided feedback to the other sub-groups as we finalized our AOF deliverable with an eye toward in-term testing. An A/B test was then conducted across a variety of course levels and verticals.
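For readers curious what an assignment like that might look like in practice, here is a simplified sketch of splitting course sections between the existing rubric (A) and the revised rubric (B), stratified by course level and vertical. The section data, seed, and variant labels are hypothetical; the project's actual test design is not reproduced here.

```python
# Minimal sketch of stratified A/B assignment of course sections to rubric
# variants. All section data below is illustrative.
import random
from collections import defaultdict

sections = [
    {"id": "ENG101-01", "level": 100, "vertical": "Humanities"},
    {"id": "ENG101-02", "level": 100, "vertical": "Humanities"},
    {"id": "BUS305-01", "level": 300, "vertical": "Business"},
    {"id": "BUS305-02", "level": 300, "vertical": "Business"},
]

random.seed(42)  # fixed seed so the assignment is reproducible

# Group sections by (level, vertical) so each stratum is split between variants.
strata = defaultdict(list)
for section in sections:
    strata[(section["level"], section["vertical"])].append(section)

assignments = {}
for members in strata.values():
    random.shuffle(members)
    for i, section in enumerate(members):
        # Alternate within each stratum: A = existing rubric, B = revised rubric.
        assignments[section["id"]] = "A" if i % 2 == 0 else "B"

for section_id, variant in sorted(assignments.items()):
    print(section_id, variant)
```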
This project involved balancing a variety of stakeholders and managing potential scope creep, from how many rubric columns should exist (three versus four) to how much to tell students that a rubric was being tested.
On the question of informing students about the test, I also coordinated with the multimedia team to develop an icon strategy (within the university’s style guide) for announcements that would encourage students to review rubrics in general without specifically naming the discussion board rubric.
Results and Takeaways
The results are currently under review. An initial finding notes a need to “norm” what discussions are trying to accomplish. This “norming” could inform how we design discussions as well as how instructors are trained to facilitate and grade discussion boards.