Clear Communication

Analyzing and Testing Rubric Criteria Scalability

The Analysis

The New Product Development team escalated a request to update the “Articulation of Response” (AOR) rubric criterion row across the university’s course catalog, based on university-wide goals related to plain language and linguistic diversity. A working group developed a new “Clear Communication” (CC) row, which went through initial, small-scale moderated and unmoderated user testing with students and faculty. The positive trends observed in this testing warranted further development of the CC row.

The Process

As the Instructional Designer for the project, I analyzed potential courses for use in an A/B in-term test, considering elements such as the number of impacted department verticals, the variety of course levels, and the number of assignments in each course. I also coded the language variations of the AOR rows across the impacted assessments. Additionally, I proactively addressed the use of “audience” (who the learners are writing for) in the guidelines associated with each assessment. While introducing an additional audience variable was out of scope for the A/B in-term testing, noting and coding the various audiences named or implied in assessment guidelines would be an important consideration if the CC row were scaled to the full university catalog.
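To illustrate the kind of candidate-selection tally described above, here is a minimal sketch in Python. All field names and sample records are hypothetical; the actual analysis was conducted against the university’s course catalog.

```python
# A minimal sketch of the course-selection tally. The records below are
# hypothetical placeholders, not real catalog data.
from collections import Counter

courses = [
    {"course": "ENG101", "vertical": "Humanities", "level": 100, "assignments": 4, "aor_variant": "A"},
    {"course": "BUS305", "vertical": "Business",   "level": 300, "assignments": 6, "aor_variant": "B"},
    {"course": "NUR210", "vertical": "Nursing",    "level": 200, "assignments": 5, "aor_variant": "A"},
]

# Tally the dimensions weighed when shortlisting A/B test candidates:
# impacted department verticals, course levels, and AOR language variants.
verticals = Counter(c["vertical"] for c in courses)
levels = Counter(c["level"] for c in courses)
aor_variants = Counter(c["aor_variant"] for c in courses)

print("Verticals:", dict(verticals))
print("Levels:", dict(levels))
print("AOR variants:", dict(aor_variants))
print("Total assignments:", sum(c["assignments"] for c in courses))
```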

In parallel with the A/B in-term testing, my analysis indicated a need to “norm” grading with the CC row, since the prior UX testing had focused on satisfaction with the row rather than its actual application to assessment. For this norming, the design team reached out to the Faculty Advisory Board (FAB) and arranged moderated and unmoderated faculty grading of “student” sample papers using the CC row versus the most common AOR row. I developed the “student” sample papers and coordinated with the UX team and the Center for Online Teaching (COLT) team to develop survey questions for the FAB sessions as well as for the A/B in-term testing.
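As a hypothetical illustration of the norming comparison, the sketch below computes the mean and spread of faculty scores for the same sample papers graded with the CC row versus the AOR row. The scores and the 4-point scale are assumptions, not the FAB data.

```python
# Hypothetical norming check: one faculty score per sample paper, graded
# once against the CC row and once against the most common AOR row.
from statistics import mean, stdev

cc_scores  = [3, 4, 3, 3, 4, 3, 4, 3]
aor_scores = [3, 4, 2, 3, 4, 3, 3, 3]

# Similar means and spreads suggest the two rows are being applied
# consistently; large gaps would flag a need for further norming.
for label, scores in (("CC", cc_scores), ("AOR", aor_scores)):
    print(f"{label} row: mean={mean(scores):.2f}, stdev={stdev(scores):.2f}")
```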

Deliverables for this project included, but were not limited to: a design memo on audience treatment in assessment guidelines; documentation of the test within the impacted courses; facilitation guides for instructors; toolkits for Deans and other stakeholders regarding the test; and share-out presentations after each test was complete.

[Figure: Spreadsheet screenshot showing the elements tested, including color-coded assessment types and courses.]

Results and Takeaways

The A/B in-term testing reached over 5,000 learners. The course success rate for the test group was significantly higher than for the control group, while the course completion rate was slightly higher, a difference that was not statistically significant. The largest takeaway was that, overall, learners and faculty were satisfied with the clarity, parity, and ease of use of the CC row. Grading results were in line with those normed in the FAB testing. The CC row has been implemented in some courses and is being rolled out university-wide.
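For context, here is a hedged sketch of the kind of significance check used to compare rates between groups: a two-proportion z-test. All counts below are made up for illustration; they are not the actual test results.

```python
# Two-proportion z-test sketch for comparing success rates between a test
# group (CC row) and a control group (AOR row). Counts are illustrative only.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for a two-proportion z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-approximation p-value
    return z, p

# Hypothetical counts: 2,150/2,500 successes (test) vs. 2,000/2,500 (control).
z, p = two_proportion_z(success_a=2150, n_a=2500, success_b=2000, n_b=2500)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p indicates a significant difference
```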