I’d like to thank Dr. Traci Cihon for introducing me to rubrics. Her rubrics are still the best.
In behavior analysis, we often focus on individualized interventions and create assessments specific to those interventions. This process is very effective, but sometimes having a standardized rubric to assess skills can be beneficial. Specifically, developing a rubric for grading learners’ writing submissions can help instructors use consistent grading criteria and give learners skill-based feedback.
One of the requirements in an undergraduate curriculum is learning to write in American Psychological Association (APA) style. Learners review the formatting guidelines and practice using those guidelines to write about research. As part of building these new scientific writing skills, learners need detailed feedback about which guidelines they’re following well and which still need improvement. Rubrics are one way that instructors can provide this information and communicate to learners what criteria they need to meet for mastery levels of performance. Rubrics have been used successfully with various learners and skills: to help learners in second grade improve their writing skills on informational texts (Clark et al., 2021), pre-service teachers improve their topic-specific professional knowledge and content knowledge (Kind, 2019), master’s students improve their skills in writing systematic reviews of the literature (Gamel et al., 2018), and faculty mentors improve their mentorship of undergraduate research students (Boysen et al., 2020). Some rubrics have also been developed to evaluate different aspects of single-case research designs (e.g., Hitchcock et al., 2015; Maggin et al., 2014).
Analytic rubrics are a type of assessment tool that resembles a grid, not unlike the performance matrices used in performance management (e.g., Eikenhout & Austin, 2005). The skills, or the permanent products of those skills, that the instructor evaluates appear in the rows, and the levels of achievement related to those skills are listed in the columns. Within the individual cells of the matrix, the instructor can specify the grading criteria for attaining each level of performance.
To create a grading rubric for the targeted writing skill, instructors can start with a task analysis to clarify the learning objectives and component skills involved (e.g., Resnick et al., 1973). Once an instructor knows what to teach and how they will teach those skills, they can include specific information about each level of performance in the rubric. For example, writing an abstract requires that the learner understand both the APA formatting guidelines and the information that an abstract should include. The following is part of an example rubric that could be used to grade an abstract based on the learning objectives:
| Learning Objective | Needs improvement (1 point) | Adequate performance (2 points) | Mastery performance (3 points) |
| --- | --- | --- | --- |
| APA style page formatting | Met 1/3 requirements: margins are 1 inch all around, the page number appears in the header, and the page number is right aligned | Met 2/3 requirements: margins are 1 inch all around, the page number appears in the header, and the page number is right aligned | Met 3/3 requirements: margins are 1 inch all around, the page number appears in the header, and the page number is right aligned |
| APA style paragraph formatting | Met 1/3 requirements: text is double spaced, text is left aligned, and the paragraph is not indented | Met 2/3 requirements: text is double spaced, text is left aligned, and the paragraph is not indented | Met 3/3 requirements: text is double spaced, text is left aligned, and the paragraph is not indented |
| APA style keywords | Met 1/3 requirements: the keywords line is indented, the Keywords label is italicized, and 4-5 relevant words or phrases are included | Met 2/3 requirements: the keywords line is indented, the Keywords label is italicized, and 4-5 relevant words or phrases are included | Met 3/3 requirements: the keywords line is indented, the Keywords label is italicized, and 4-5 relevant words or phrases are included |
| Problem under investigation | Met 1/3 requirements: paraphrased the research question, operationally defined the independent variable, and operationally defined the dependent variable | Met 2/3 requirements: paraphrased the research question, operationally defined the independent variable, and operationally defined the dependent variable | Met 3/3 requirements: paraphrased the research question, operationally defined the independent variable, and operationally defined the dependent variable |
| Results | Met 1/3 requirements: defined the measurement scale, described the main findings with respect to the measurement scale, and compared scores across conditions | Met 2/3 requirements: defined the measurement scale, described the main findings with respect to the measurement scale, and compared scores across conditions | Met 3/3 requirements: defined the measurement scale, described the main findings with respect to the measurement scale, and compared scores across conditions |
Not every learning objective needs to be worth the same number of points, and sometimes learners will not include any information that addresses a given learning objective. The points earned for each learning objective can therefore be adjusted to include zero as well as partial values (e.g., 0, 1.5, 3, 4.5). Instructors can also update the rubrics and the writing curriculum over time to address common errors that weren't initially included in the rubrics. While rubrics can make delivering feedback more efficient, instructors can still offer more specific comments and suggested edits in addition to the rubric scores.
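For instructors who track grades in a spreadsheet or script, this scoring logic is simple to automate. The following is a minimal Python sketch, not a prescribed implementation: the data structure, function name, and per-objective point values are illustrative assumptions based on the example rubric and partial-credit values above.

```python
# Minimal sketch of rubric scoring with per-objective weights and partial credit.
# Objective names come from the example rubric above; point values are hypothetical.

RUBRIC = {
    "APA style page formatting": {
        "requirements": [
            "1-inch margins",
            "page number in header",
            "page number right aligned",
        ],
        "points_per_requirement": 1.0,   # yields 0, 1, 2, or 3 points, as in the table
    },
    "Problem under investigation": {
        "requirements": [
            "paraphrased research question",
            "operationally defined independent variable",
            "operationally defined dependent variable",
        ],
        "points_per_requirement": 1.5,   # weighted objective: 0, 1.5, 3, or 4.5 points
    },
}

def score_submission(requirements_met: dict[str, int]) -> dict[str, float]:
    """Return points earned per objective, allowing zero when an objective is skipped."""
    scores = {}
    for objective, spec in RUBRIC.items():
        met = requirements_met.get(objective, 0)      # 0 if the learner never addressed it
        met = min(met, len(spec["requirements"]))     # cap at the number of requirements
        scores[objective] = met * spec["points_per_requirement"]
    return scores

if __name__ == "__main__":
    graded = score_submission({
        "APA style page formatting": 2,      # met 2/3 requirements -> 2.0 points
        "Problem under investigation": 3,    # met 3/3 requirements -> 4.5 points
    })
    print(graded, "total:", sum(graded.values()))
```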
Obeid and Hill (2018) found that pairing grading rubrics with APA style instructional videos improved learners' APA style writing compared to standard instructional methods. Rubrics can also make grading efficient enough that learners can submit multiple drafts of an assignment and receive feedback prior to a final grade (Stellmack et al., 2015). Alternatively, rubrics can be used on a sliding scale to provide feedback and encouragement to learners who are struggling (Mahmood & Jacobo, 2019).
Interobserver agreement (IOA) for grading with rubrics is not always as high as it could be (e.g., 0.37 with conservative calculation methods versus 0.90 with liberal methods; Stellmack et al., 2009), but IOA (and intraobserver agreement) could likely be improved with explicit grader training to a 0.90 IOA mastery criterion on the learning objectives. Behavior analysts have many tools available to help them choose an appropriate IOA measure, set a sufficient mastery criterion for performance, and calculate IOA (e.g., see Hausman et al., 2022; Reed & Azulay, 2011). Even with the current modest IOA estimates, an advantage of rubrics is that they remove some of the subjectivity involved in grading writing assignments with vague criteria (e.g., 5 points for the literature review in the introduction section and 4 points for the overall flow of the introduction section in an APA style paper).
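As an illustration of how two graders' rubric scores might be compared, the sketch below (a hypothetical example, not the calculation Stellmack et al., 2009, reported) assumes each grader assigns a level from 1 to 3 per learning objective and computes exact agreement alongside a more liberal within-one-level agreement, roughly mirroring the conservative-versus-liberal distinction noted above.

```python
# Minimal IOA sketch for rubric scores: two graders each assign a level (1-3)
# to every learning objective. The data and agreement definitions are hypothetical.

def exact_agreement(grader_a: list[int], grader_b: list[int]) -> float:
    """Proportion of objectives where both graders chose the identical level."""
    matches = sum(a == b for a, b in zip(grader_a, grader_b))
    return matches / len(grader_a)

def within_one_level(grader_a: list[int], grader_b: list[int]) -> float:
    """More liberal measure: levels count as agreeing if they differ by at most 1."""
    matches = sum(abs(a - b) <= 1 for a, b in zip(grader_a, grader_b))
    return matches / len(grader_a)

if __name__ == "__main__":
    # Levels for the five objectives in the example rubric, scored by two graders.
    grader_a = [3, 2, 3, 1, 2]
    grader_b = [3, 1, 3, 2, 2]
    print(f"Exact IOA: {exact_agreement(grader_a, grader_b):.2f}")        # 0.60
    print(f"Within-one IOA: {within_one_level(grader_a, grader_b):.2f}")  # 1.00
```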
Adopting rubrics for grading papers can help standardize the grading process, set clear expectations for learners (especially if they receive the rubric along with the instructions for writing the paper), and provide efficient feedback to learners about their writing. Rubrics are an effective assessment tool that educators can use to grade writing assignments with several learning objectives.
References:
Boysen, G. A., Sawhney, M., Naufel, K. Z., Wood, S., Flora, K., Hill, J., & Scisco, J. L. (2020). Mentorship of undergraduate research experiences: Best practices, learning goals, and an assessment rubric. Scholarship of Teaching and Learning in Psychology, 6, 212-224. https://doi.org/10.1037/stl0000219
Clark, S. K., Judd, E., Smith, L. K., & Ahlstrom, E. (2021). Examining the effects of integrated science and literacy instruction to teach second-graders to write compare and contrast informational text. Early Childhood Education Journal, 49, 567-579. https://doi.org/10.1007/s10643-020-01106-9
Eikenhout, N., & Austin, J. (2005). Using goals, feedback, reinforcement, and a performance matrix to improve customer service in a large department store. Journal of Organizational Behavior Management, 24, 27-62. https://doi.org/10.1300/J075v24n03_02
Gamel, C., van Andel, S. G., de Haan, W. I., & Hafsteinsdóttir, T. B. (2018). Development and testing of an analytic rubric for a master’s course systematic review of the literature: A cross-sectional study. Education for Health, 31, 72-79. https://doi.org/10.4103/efh.EfH_336_17
Hausman, N. L., Javed, N., Bednar, M. K., Guell, M., Schaller, E., Nevill, R. E., & Kahng, S. W. (2022). Interobserver agreement: A preliminary investigation into how much is enough? Journal of Applied Behavior Analysis, 55, 357-368. https://doi.org/10.1002/jaba.811
Hitchcock, J. H., Kratochwill, T. R., & Chezan, L. C. (2015). What Works Clearinghouse standards and generalization of single-case design evidence. Journal of Behavioral Education, 24, 459-469. https://doi.org/10.1007/s10864-015-9224-1
Kind, V. (2019). Development of evidence-based, student-learning-oriented rubrics for pre-service science teachers’ pedagogical content knowledge. International Journal of Science Education, 41, 911-943. https://doi.org/10.1080/09500693.2017.1311049
Maggin, D. M., Briesch, A. M., Chafouleas, S. M., Ferguson, T. D., & Clark, C. (2014). A comparison of rubrics for identifying empirically supported practices with single-case research. Journal of Behavioral Education, 23, 287-311. https://doi.org/10.1007/s10864-013-9187-z
Mahmood, D., & Jacobo, H. (2019). Grading for growth: Using sliding scale rubrics to motivate struggling learners. Interdisciplinary Journal of Problem-Based Learning, 13(2). https://doi.org/10.7771/1541-5015.1844
Obeid, R., & Hill, D. B. (2018). Freely available instructional video and rubric can improve APA style in a research methods classroom. Scholarship of Teaching and Learning in Psychology, 4, 308-314. https://doi.org/10.1037/stl0000123
Reed, D. D., & Azulay, R. L. (2011). A Microsoft Excel® 2010 based tool for calculating interobserver agreement. Behavior Analysis in Practice, 4, 45-52. https://doi.org/10.1007/BF03391783
Resnick, L. B., Wang, M. C., & Kaplan, J. (1973). Task analysis in curriculum design: A hierarchically sequenced introductory mathematics course. Journal of Applied Behavior Analysis, 6, 679-710. https://doi.org/10.1901/jaba.1973.6-679
Stellmack, M. A., Konheim-Kalkstein, Y. L., Manor, J. E., Massey, A. R., & Schmitz, J. A. P. (2009). An assessment of reliability and validity of a rubric for grading APA-style introductions. Teaching of Psychology, 36, 102-107. https://doi.org/10.1080/00986280902739776
Stellmack, M. A., Sandidge, R. R., Sippl, A. L., & Miller, D. J. (2015). Incentivizing multiple revisions improves student writing without increasing instructor workload. Teaching of Psychology, 42, 293-298. https://doi.org/10.1177/0098628315603060