Abstract: Provides deeper analysis and explanation of use cases related to VALUE rubric use, modification, calibration training, and scoring. Reiterates the development process, with greater emphasis on rubric modifications and their associated warnings: "As greater modifications in the original VALUE rubrics are made, the more difficult it becomes to place local results in broader contexts" (p. 21). Two forms of validity are discussed - face and content - as well as adoption (e.g., "use"). Reliability is framed as a measure of "approximate agreement" that begins during calibration training and is determined by the "two scores around which the majority clusters" (p. 15; p. 23). The authors emphasize the importance of calibration training for securing higher levels of interrater reliability, although the "desired levels" are not specified (p. 24). Additionally, they note that training varies "considerably from campus to campus," and that best practices need to be synthesized and made more transparent as part of future research (p. 24). Lastly, the article introduces the concept of "a way to gain a sense of how students at one institution are doing in relation to similar students elsewhere"; the VALUE-MSC collaboration is mentioned as enabling "the creation of national benchmarks for learning" and providing "a landscape of learning that any institution or state can use to benchmark local performance with relevant peer groups" (p. 43).
Using the VALUE rubrics for improvement of learning and authentic assessment
Rhodes, T., & Finley, A. (2014). Using the VALUE rubrics for improvement of learning and authentic assessment. Peer Review, 3, 32. http://proxy-remote.galib.uga.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=edsgao&AN=edsgcl.394333056&site=eds-live