Computer-based quizzes are useful to instructors who wish to provide their students with opportunities for frequent testing and feedback. They eliminate the logistical problems with paper-and-pencil tests, especially in courses with large enrollments. However, one concern with computer software involves scalability—how many users the system can handle effectively. Johnson (2002) recently described hardware issues related to this topic.
We have used a popular Internet courseware package in our general and human development psychology courses at the University of Minnesota (UM). In this article, we recount the pedagogical benefits and challenges that we experienced with this resource, and we offer practical recommendations for instructors preparing to implement it within a high-demand environment.
For several years we have taught our psychology courses with the Personalized System of Instruction (PSI; Keller, 1968), a highly researched version of the mastery learning teaching method. Students in PSI work at their own pace to read a textbook with direction from a study guide and, when they are ready, take chapter quizzes; after they master one chapter, they move on to the next. Several reviews and meta-analyses (Keller, 1974; Kulik, Kulik, & Bangert-Drowns, 1990; Kulik, Kulik, & Cohen, 1979; Robin, 1976; Ryan, 1974) have found superior student learning in PSI compared to traditional lecture/discussion methods. For example, in the 26 college psychology courses analyzed by Kulik, Kulik, and Bangert-Drowns, the effect size in favor of PSI over traditional instruction was 0.71. This means that the average PSI student demonstrated nearly three fourths of a standard deviation greater learning than the average student in an equivalent, traditionally taught psychology class.
PSI is especially appropriate for our general psychology and human development courses (Brothen & Wambach, 2000), in which students are expected to master a large set of concepts, theories, and facts. The PSI method emphasizes repeated testing as a means of providing feedback. This pedagogical approach led us to develop computer-based quizzes that deliver feedback quickly and allow many opportunities for retesting.
We team-teach our courses in a 40-station, networked computer classroom. Students in the six sections of our PSI-based general psychology course, which meets four days each week for one hour, work both outside and in class on computerized exercises for each of 18 textbook chapters. When they achieve mastery scores on the exercises and feel sufficiently prepared, students take proctored chapter quizzes at eight additional testing computers located at the back of the classroom. They can do exercises as many times as they like and can take chapter quizzes three times each. The last course activity is a proctored final exam that they take only once. Our approach is much the same in the human development course we teach one evening per week: Students take nineteen 10-item, multiple-choice chapter quizzes as many times as they like to prepare for four proctored, 40-item multiple-choice unit exams, each of which they take only once. Students read the text and take quizzes largely on their own. They interact with us through weekly one-on-one sessions in which we help each student complete his or her research papers. For all exercises and quizzes in both courses, students' highest scores count toward their grades. The opportunity to improve a specific score is an inducement to restudy and retake a quiz.
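The retake policy described above (students' highest scores count toward their grades) can be sketched in a few lines; the function name and data layout below are illustrative, not part of any actual courseware.

```python
# Illustrative sketch (not WebCT code): under a retake policy where the
# highest score counts, each new attempt can only raise a student's grade,
# which is the inducement to restudy and retake.

def best_scores(attempts):
    """Map each quiz to the highest score across all recorded attempts.

    `attempts` is a list of (quiz_id, score) tuples, one per attempt.
    """
    best = {}
    for quiz_id, score in attempts:
        best[quiz_id] = max(score, best.get(quiz_id, score))
    return best

attempts = [("ch1", 6), ("ch1", 9), ("ch2", 7), ("ch1", 8)]
print(best_scores(attempts))  # {'ch1': 9, 'ch2': 7}
```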
Initially, we created and used an MS-DOS-based, networked course delivery system (Brothen, 1995) that provided fill-in-the-blank exercises and multiple-choice quizzes, gave students feedback on their progress, and recorded student scores in individual log files for grading and research purposes. With the advent of the Pentium III computer chip, our Turbo Pascal-based course software became obsolete. We had been looking for a way to deliver courses via the Internet, and new resources made available at the University level provided an opportunity to do that.
WebCT Quizzes: Promising Technology, Practical Problems
Like many other educational institutions, the University of Minnesota made a commitment to Web-based instruction by adopting WebCT course software (for a description and comparison of WebCT with other courseware packages, see the Edutools site created by the Western Cooperative for Educational Telecommunications). In fall 2000, the University began a changeover from WebCT version 1.2 to version 3.0. This was fortuitous for us because the version 3.0 quizzing function (in WebCT terminology, the term "quizzes" covers all exercises, quizzes, and exams) had improved features that made its delivery of our courses feasible. We were also happy to offer students the opportunity to do online coursework outside the classroom. Our courses were the only ones on the version 3.0 server during the fall term as the computer center worked out implementation problems associated with the upgrade. We taught seven 40-student sections of our general psychology class and two sections of our human development class that semester. These courses went fairly smoothly after the bugs were worked out of the system, but we did notice some slow response times when the classroom was full and many students were working. That experience did not prepare us for the spring semester, when several dozen more courses joined us on the version 3.0 server.
The success of the PSI model depends, in part, on students getting prompt feedback on their progress (Wambach, Brothen, & Dikel, 2000) and being able to continue when they have mastered a block of material. If we were interested only in whether students knew the material, we could give paper-and-pencil quizzes. We use computerized quizzing because immediate feedback is an integral component of our teaching model. For each chapter, our general psychology students must progress from a fill-in-the-blank exercise to a multiple-choice practice exercise, mastering both at the 70% level before moving on to the multiple-choice chapter quiz (Figure 1, Figure 2). Immediately after they finish each exercise and chapter quiz, students see their scores and receive feedback on material for further study. If students do not get this information within a reasonable time, our course design breaks down.
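The per-chapter gating sequence just described can be expressed as a simple predicate; this is a sketch of the course rule, not an implementation detail of WebCT.

```python
# Illustrative sketch (not WebCT code) of the per-chapter sequence:
# students must reach the 70% mastery level on both the fill-in-the-blank
# exercise and the multiple-choice practice exercise before the chapter
# quiz becomes available.

MASTERY = 0.70  # mastery criterion from the course design

def quiz_unlocked(fill_in_pct, practice_pct, mastery=MASTERY):
    """Return True when both preparatory exercises meet the mastery level."""
    return fill_in_pct >= mastery and practice_pct >= mastery

print(quiz_unlocked(0.80, 0.75))  # True: both exercises mastered
print(quiz_unlocked(0.90, 0.60))  # False: practice exercise below 70%
```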
In spring 2001, we found that when 20 or more general psychology students were working online in our classroom, the system slowed to such an extent that it became nonfunctional. At times during the day, student work did not score at all, and students often waited up to 20 minutes before giving up and leaving. This problem was particularly bad in the late morning because a chemistry professor using WebCT required his 300 students to complete a one-item quiz each day before his noon lecture, and many of them procrastinated until just before class. At the end of the semester, when our own procrastinators were crowding into the classroom to finish their work, quizzes would not score. Not only were students unable to get feedback on their quizzes, they also could not move through the sequence of chapter activities, which only heightened their usual anxieties.
Despite these problems, we were optimistic about another WebCT upgrade that was implemented in fall 2001. Version 3.6 provided a load-balancing feature whereby more than one server could be used to spread the load demand of the nearly 1,000 UM courses scheduled for moving to the system. By early in the fall semester, three huge Sun servers had been installed, but the same difficulties remained. The system was so slow during peak demand times that our students began their own "load-balancing" by doing their exercises at night, outside of class. This was not a perfect solution because the proctored quizzes required classroom attendance, and students could not get immediate help from us if they had questions about other exercises. We got sympathy but no solution from the university technical staff, who concluded that no matter what they did, WebCT would not be able to handle the demands of our course structure. The software's quizzing function apparently takes so much processor time that even the computer center's impressive bank of computers could not meet our needs.
The students in our evening human development course fared much better. Although they sometimes took chapter quizzes in the classroom, they spread their work over the week during both day and night. We did, however, have problems getting the unit exams scored promptly. We often had to use the WebCT instructor grading function to score exams after students had grown tired of waiting and left. Other than students not seeing their scores immediately, this problem did not interfere greatly with our course model. Occasionally students complained that chapter quizzes had not been graded, but this typically happened when they completed the quizzes during the high-demand (9 a.m.-4 p.m.) weekday time slot. WebCT met our needs in the human development course because, for the most part, students could easily avoid high-demand times.
Suggestions for Users
Based on our experience with the WebCT quiz tool, we offer the following suggestions to other potential users.
First, instructors considering the use of WebCT quizzes in computer classroom settings should not count on delivering immediate results and feedback. The most likely scenario, based on our experience, is that students will take the quiz (or exam), submit it, and have to check later to find out how they did. The preferable course of action is to schedule quizzes that require quick scoring and feedback for times when server loads are low.
Second, instructors should avoid using the selective release feature to spread the server load. Managing traffic by specifying when subsets of students can take quizzes adds steps and slows the system even more.
Third, instructors should be especially careful with deadlines. A deadline during high-demand times puts the inevitable procrastinators in competition with other sources of server traffic and often results in ungraded quizzes, unhappy students, and problems for other users. An additional downside is that instructors often have to search through the WebCT records to find ungraded quizzes and do the scoring themselves.
Fourth, instructors should carefully consider putting time limits on quizzes. Without time limits, students can search their books and notes until they find the correct answers or print a quiz and close the browser. Closing the browser (or severing the Internet connection) means that the next time the students log in, they receive the exact same quiz. This defeats the purpose of a "one-attempt" limit or random selection of quiz items from a question pool because students can find the answers at their leisure, reload the same quiz later, and then enter their responses. If the instructor's goal is to give students feedback on what they know, time limits on quizzes are necessary. A word of caution is required, however: Time limits will ultimately increase network traffic because students will need more attempts to reach mastery and attain maximum points.
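One way to see why the browser-closing trick works, and how server-side timing would close it: if the attempt's start time is recorded when the quiz is first served, reopening the browser later neither resets the clock nor produces a fresh attempt. The sketch below assumes a hypothetical quiz server and invented names; it is not how WebCT implements time limits.

```python
# Hypothetical sketch: record an attempt's start time server-side when the
# quiz is first served, so closing and reopening the browser cannot reset
# the clock or grant extra time on the same attempt.
import time

TIME_LIMIT = 15 * 60   # seconds allowed per attempt (assumed value)
attempt_started = {}   # (student_id, quiz_id) -> server-side start time

def serve_quiz(student_id, quiz_id, now=None):
    """Start the clock only once per attempt, even across reconnects."""
    now = time.time() if now is None else now
    attempt_started.setdefault((student_id, quiz_id), now)

def submission_accepted(student_id, quiz_id, now=None):
    """Reject answers submitted after the server-side deadline."""
    now = time.time() if now is None else now
    started = attempt_started[(student_id, quiz_id)]
    return now - started <= TIME_LIMIT

serve_quiz("s1", "ch3", now=0)
serve_quiz("s1", "ch3", now=600)          # reconnect does not reset the clock
print(submission_accepted("s1", "ch3", now=800))   # True: within 15 minutes
print(submission_accepted("s1", "ch3", now=1000))  # False: past the deadline
```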
Most of the problems described above will be magnified in large institutions with large-enrollment classes. Small schools may not be immune to such complications, however: our human development course meets early in the evening, when overall campus use is relatively low, yet even 25 students taking a 40-item exam stresses the system.
A new database version of WebCT is now available. While database software is more efficient at handling large numbers of users than a data file application such as version 3.X, there is reason to believe that the problems we have encountered will not disappear completely. Database programs are not immune to bottlenecking. Also, given the large number of current users across the country, version 3.X seems likely to be in operation for some time. In any case, we advise instructors using WebCT quizzes to assign them in a way that encourages students to use them as assessment and feedback devices outside of class on a fluid schedule. This approach will help minimize frustration for both students and faculty members.
Brothen, T. (1995). Using a new text-based authoring system to create a computer-assisted introductory psychology course. In T. Sechrest, M. Thomas, & N. Estes (Eds.), Leadership for creating educational change: Integrating the power of technology (Vol. 1, pp. 308-310). Austin: University of Texas. 12th International Conference on Technology and Education.
Brothen, T., & Wambach, C. (2000). A research based approach to developing a computer-assisted course for developmental students. In J. L. Higbee & P. L. Dwinell (Eds.), The many faces of developmental education (pp. 59-72). Warrensburg, MO: National Association for Developmental Education.
Johnson, D. F. (2002). Using a course management system for large classes: Support, infrastructure, and policy issues. The Technology Source. Retrieved January 31, 2003, from http://technologysource.org/?view=article&id=386
Keller, F. (1968). "Goodbye, teacher..." Journal of Applied Behavior Analysis, 1, 79-89.
Keller, F. (1974). Ten years of personalized instruction. Teaching of Psychology, 1, 4-9.
Kulik, C., Kulik, J., & Bangert-Drowns, R. (1990). Effectiveness of mastery learning programs: A meta-analysis. Review of Educational Research, 60, 265-299.
Kulik, C., Kulik, J., & Cohen, P. (1979). A meta-analysis of outcome studies of Keller's Personalized System of Instruction. American Psychologist, 34, 307-318.
Robin, A. (1976). Behavioral instruction in the college classroom. Review of Educational Research, 46, 313-354.
Ryan, B. (1974). PSI, Keller's Personalized System of Instruction. Washington, DC: American Psychological Association.
Wambach, C., Brothen, T., & Dikel, T. N. (2000). Toward a developmental theory for developmental educators. Journal of Developmental Education, 24(1), 2-4, 6, 8, 10, 29.