Contrasting Traditional In-Class Exams with Frequent Online Testing
Abstract
Although online exams offer clear practical benefits over in-class exams (e.g., reduced cost, increased scalability, flexible scheduling), previous studies provide mixed evidence for the effectiveness of online testing. This uncertainty may discourage instructors from adopting it. To further investigate the effectiveness of online exams in a naturalistic setting, we compared student learning outcomes between traditional in-class exams and frequent online exams. Online exams were administered more frequently to mitigate potential negative effects associated with open-book testing. All students completed both in-class and online exams, with the order of testing condition (in-class first or online first) counterbalanced between students. We found no difference in long-term retention for material originally tested with frequent online versus traditional in-class exams, and no difference in self-reported study time. Overall, our results suggest that frequent online assessments do not harm student learning relative to traditional in-class exams and may impart positive subjective outcomes for students.
Article Details
- Authors retain copyright and grant the Journal of Teaching and Learning with Technology (JoTLT) right of first publication, with the work simultaneously licensed under a Creative Commons Attribution (CC-BY) 4.0 International License, allowing others to share the work with proper acknowledgement and citation of the work's authorship and initial publication in JoTLT.
- Authors are able to enter separate, additional contractual agreements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in JoTLT.
- In pursuit of manuscripts of the highest quality, multiple opportunities for mentoring, and greater reach and citation of JoTLT publications, JoTLT encourages authors to share their drafts to seek feedback from relevant communities, unless the manuscript is already under review or in the publication queue after being accepted. In other words, to be eligible for publication in JoTLT, manuscripts should not be shared publicly (e.g., online) while under review (after being initially submitted, or after being revised and resubmitted for reconsideration) or upon notice of acceptance and before publication. Once published, authors are strongly encouraged to share the published version widely, with an acknowledgement of its initial publication in JoTLT.
References
Agarwal, P. K. & Roediger, H. L. (2011). Expectancy of an open-book test decreases performance on a delayed closed-book test. Memory, 19, 836-852. doi: 10.1080/09658211.2011.613840
Alexander, M. W., Bartlett, J. E., Truell, A. D., & Ouwenga, K. (2001). Testing in a computer technology course: An investigation of equivalency in performance between online and paper and pencil methods. Journal of Career and Technical Education, 18(1), 69-80. Retrieved from http://scholar.lib.vt.edu/ejournals/JCTE/v18n1/alexander.html
Bacdayan, P. (2004). Comparison of management faculty perspectives on quizzing and its alternatives. Journal of Education for Business, 80, 5-9. doi: 10.3200/JOEB.80.1.5-9
Bangert-Drowns, R. L., Kulik, J. A. & Kulik, C. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, 89-99.
Brewster, J. (1996). Teaching abnormal psychology in a multimedia classroom. Teaching of Psychology, 23, 249-252.
Brothen, T. & Wambach, C. (2001). Effective student use of computerized quizzes. Teaching of Psychology, 28, 292-294. doi: 10.1207/S15328023TOP2804_10
Chan, J. C. K. (2010). Long-term effects of testing on the recall of nontested materials. Memory, 18, 49-57. doi: 10.1080/09658210903405737
Chan, J. C. K., McDermott, K. B., & Roediger, H. L. (2006). Retrieval-induced facilitation: Initially non-tested material can benefit from prior testing of related material. Journal of Experimental Psychology: General, 135, 553-571. doi: 10.1037/0096-3445.135.4.553
Daniel, D. B. & Broida, J. (2004). Using web-based quizzing to improve exam performance: lessons learned. Teaching of Psychology, 31, 207-208. doi: 10.1207/s15328023top3103_6
Landrum, R. E. (2007). Introductory psychology student performance: Weekly quizzes followed by a cumulative final exam. Teaching of Psychology, 34, 177-180. doi: 10.1080/00986280701498566
Maki, W. S. & Maki, R. H. (2001). Mastery quizzes on the web: Results from a web-based introductory psychology course. Behavior Research Methods, Instruments, & Computers, 33, 212-216. doi: 10.3758/BF03195367
McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19, 494-513. doi: 10.1080/09541440701326154
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L., (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414. doi: 10.1037/a0021782
Myers, D. G. (2009). Psychology in everyday life. New York, NY: Worth.
Roediger, H. L. & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181-210. doi: 10.1111/j.1745-6916.2006.00012.x
Rohrer, D. & Pashler, H. (2007). Increasing retention without increasing study time. Current Directions in Psychological Science, 16, 183-186. doi: 10.1111/j.1467-8721.2007.00500.x
State Higher Education Executive Officers. (2012). State higher education finance report FY 2012. Retrieved from http://www.sheeo.org/sites/default/files/publications/SHEF%20FY%201220130322rev.pdf
Stowell, J. R., & Bennett, D. (2010). Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research, 42(2), 161-171. doi: 10.2190/EC.42.2.b
Taraban, R., Maki, W. S. & Rynearson, K. (1999). Measuring study time distributions: Implications for designing computer-based courses. Behavior Research Methods, Instruments, & Computers, 31(2), 263-269. doi: 10.3758/BF03207718