Rubrics on the Fly: Improving Efficiency and Consistency with a Rapid Grading and Feedback System


Adam C. Moyer
William A. Young II
Gary R. Weckman
Clarence (Red) Martin
Ken Cutright

Abstract

Many learning management systems (LMSs) used in higher education provide customizable rubrics that aid in grading and providing feedback for many forms of assessment commonly used by educators today. Rapid Grade is a grading and feedback feature built into a non-commercialized LMS developed at a large, public, Midwestern university in the United States. In this research, Rapid Grade was compared with the grading and feedback system of one of the most widely used LMSs in higher education; that LMS is not named here. Using the Technology Acceptance Model (TAM) to evaluate whether Rapid Grade empirically improves upon existing methods, survey results indicate that Rapid Grade is a significant improvement in terms of ease of use and usefulness when grading and providing feedback for a given assessment. The Rapid Grade framework, as well as the specific results of the TAM, are presented.

Article Details

How to Cite
Moyer, A. C., Young II, W. A., Weckman, G. R., Martin, C. (Red), & Cutright, K. (2015). Rubrics on the Fly: Improving Efficiency and Consistency with a Rapid Grading and Feedback System. Journal of Teaching and Learning With Technology, 4(2), 6–29. https://doi.org/10.14434/jotlt.v4n2.13473
Section
Articles
Author Biographies

Adam C. Moyer, Ohio University, College of Business

Department of Management Information Systems

William A. Young II, Ohio University, College of Business

Department of Management

Gary R. Weckman, Ohio University, Russ College of Engineering and Technology

Department of Industrial and Systems Engineering

Clarence (Red) Martin, Ohio University, College of Business

Department of Management

References

Teachers.org. (2012, March 7). RubiStar Home. Retrieved March 7, 2012, from RubiStar: http://rubistar.4teachers.org/

Abdullah, T., Mateen, A., Sattar, A. R., & Mustafa, T. (2010). Risk Analysis of Various Phases of Software. European Journal of Scientific Research, 40(3), 369-373.

Andrade, H. G. (2000). What Do We Mean by Results? Using Rubrics to Promote Thinking and Learning. Educational Leadership, 57(5), 13-18.

Andrade, H. G. (2005, Winter). Teaching With Rubrics: The Good, the Bad, and the Ugly. College Teaching, 53(1), 27-31.

Anglin, L., Anglin, K., Schumann, P., & Kaliski, A. (2008). Improving the Efficiency and Effectiveness of Grading Through the Use of Computer-Assisted Grading Rubrics. Decision Sciences Journal of Innovative Education, 6(1), 51-73.

Atkinson, D., & Lim, S. (2013). Improving assessment processes in Higher Education: Student and teacher perceptions of the effectiveness of a rubric embedded in a LMS. Australasian Journal of Educational Technology, 29(5), 651-666.

Avison, D. E., & Fitzgerald, G. (2003). Where now for development methodologies? Communications of the ACM, 46(1), 78-82.

Beynon-Davies, P. M. (2000, July). ‘It’s lots of bits of paper and ticks and post-it notes and things . . .’: a case study of a rapid application development project. Information Systems Journal, 10(3), 195-216.

Butler, D. A. (2011). Closing the loop 21st century style: providing feedback on written assessment via MP3 recordings. Journal of Australasian Law Teachers Association, 4(1&2), 99-107.

CampusComputing. (2013). A Profile of the LMS Market. The National Survey of Computing and Information Technology, 1-35.

Centers for Medicare & Medicaid Services. (2008). Selecting Development Approach. Retrieved from http://www.cms.hhs.gov/SystemLifecycleFramework/Downloads/SelectingDevelopmentApproach.pdf

Cheang, B., Kurnia, A., Lim, A., & Oon, W.-C. (2003). On automated grading of programming assignments in an academic institution. Computers & Education, 41, 121-131.

Chen, P. M. (2004, May). An Automated Feedback System for Computer Organization Projects. IEEE Transactions on Education, 47(2), 232-240.

Cooper, B. S., & Gargan, A. (2009). Rubrics in Education Old Term, New Meanings. Phi Delta Kappan, 91(1), 54-55.

Cross, P. K. (1990). Teaching to Improve Learning. Journal on Excellence in College Teaching, 9-22.

Davis, F. D. (1989, September). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319-340.

Dornisch, M. M., & McLoughlin, A. S. (2006). Limitations of web-based rubric resources: Addressing the challenges. Practical Assessment, Research & Evaluation, 11, 1-8.

Goodrich, H. (1997, January). Teaching for Student Performance. Educational Leadership, 54(4), 14-17.

Hall, R., Butler, L., McGuire, S., McGlynn, S., Lyon, G., Reese, R., & Limbach, P. (2001). Automated, Web-Based, Second-Chance Homework. Journal of Chemical Education, 78(12), 1704.

He, Y., Hui, S. C., & Quan, T. T. (2009). Automatic summary assessment for intelligent tutoring systems. Computers & Education, 53, 890-899.

Howell, R. (2011). Exploring the Impact of Grading Rubrics on Academic Performance: Findings From a Quasi-Experimental, Pre-Post Evaluation. Journal on Excellence in College Teaching, 22(2), 31-49.

Klein, J., & El, L. P. (2010). Impairment of teacher efficiency during extended sessions of test correction. European Journal of Teacher Education, 26(3), 379-392.

Likert, R. (1932). A Technique for the Measurement of Attitudes. Archives of Psychology.

Machado, M., & Tao, E. (2007). Blackboard vs. moodle: Comparing user experience of learning management systems. Frontiers In Education Conference - Global Engineering: Knowledge Without Borders, Opportunities Without Passports, (pp. 7-12). doi:10.1109/FIE.2007.4417910

Mackay, H., Carne, C., Beynon-Davies, P., & Tudhope, D. (2000, October). Reconfiguring the User: Using Rapid Application Development. Social Studies of Science, 30(5), 737-757.

Meyers, L. S., Gamst, G. C., & Guarino, A. J. (2005). Applied Multivariate Research: Design and Interpretation. Sage Publications, Inc.

Miser, H., & Quade, E. (1988). Handbook of Systems Analysis: Craft Issues and Procedural Choices. USA: Wiley.

Necco, C. R., Gordon, C. L., & Tsai, N. W. (1987, December). Systems Analysis and Design: Current Practices. MIS Quarterly, 11(4), 461-476.

Ramey, S., VandeVusse, L., & Gosline, M. (2007). Using a Written Communication Rubric to Improve Students' Writing. International Journal of Learning, 13(10), 67-74.

Riddle, E., & Smith, M. (2008). Developing and Using Rubrics in Quantitative Business Courses. The Coastal Business Journal, 7(1), 82-95.

Spinellis, D., Zaharias, P., & Vrechopoulos, A. (2007). Coping With Plagiarism and Grading Load: Randomized Programming Assignments and Reflective Grading. Computer Applications in Engineering Education, 15(2), 113-123.

TeacherPlanet.com. (2012, March 7). Rubrics for Teachers. Retrieved March 7, 2012, from Teacher Planet: http://www.rubrics4teachers.com/

TeAch-nology. (2012, March 7). Rubrics and Rubric Makers. Retrieved March 7, 2012, from teAchnology: http://www.teach-nology.com/web_tools/rubrics/

Thompson, M., & Ahn, B. (2012). The Development of an Online Grading System for Distributed Grading in a Large First Year Project-Based Design Course. American Society for Engineering Education, 2012-3467.

US Department of Justice. (2012, March 7). DOJ Systems Development Life Cycle Guidance Chapter 1. Retrieved March 7, 2012, from US Department of Justice: http://www.justice.gov/jmd/irm/lifecycle/ch1.htm

Wolf, K., & Stevens, E. (2007). The Role of Rubrics in Advancing and Assessing Student Learning. The Journal of Effective Teaching, 7(1), 3-14.