June 1998 // Commentary
Does Using Technology in Instruction Enhance Learning?
by Ed Neal
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Ed Neal, "Does Using Technology in Instruction Enhance Learning?" The Technology Source, June 1998. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

Jerald Schutte, a professor of sociology at CSU-Northridge, has attracted attention for his 1996 study of the use of technology in instruction. Schutte reported that students in his virtual class performed 20% better than students in his traditional class. The results of the study were widely disseminated on listservs and in the popular press (including a piece in the Chronicle of Higher Education, Feb. 21, 1997, A23). His article, "Virtual Teaching in Higher Education: The New Intellectual Superhighway or Just Another Traffic Jam?" is available online at http://www.csun.edu/sociology/virexp.htm.

Unfortunately, Schutte's research design and methodology are so flawed that the results of the study are uninterpretable. His fundamental error arises from confusion between teaching methods and delivery systems, an error that is common to many studies in this field.

The Study

Description. Students enrolling in Schutte's course in Social Statistics (n=40) were randomly assigned to a "traditional" class or a "virtual" class in roughly equal numbers. Students in the traditional class (where he presumably lectured) met every Saturday during the 14-week semester and submitted weekly problem assignments. Here is Schutte's description of the virtual class:

The virtual class had four assignments each week: 1) e-mail collaboration among randomly assigned groups of three students who generated weekly statistical reports and e-mailed them to the instructor; 2) hypernews discussion in which each student responded to a weekly discussion topic; 3) forms input via the WWW that allowed students to solve the same homework problems as those being solved by the traditional class; and 4) a weekly moderated Internet relay chat (mIRC) in which student discussion and dialogue were carried out in real time in the virtual presence of the professor.

At the end of the semester, the virtual students scored an average of 20 points higher than the traditional students on the 100-point midterm and final exams, a difference that Schutte attributes to virtual interaction:

This experiment was intended to assess the merits of a traditional, versus virtual, classroom environment on student test performance and student affect toward the experience. It was hypothesized that face-to-face professor-student interaction is crucial to test performance. However, the data indicate the reverse, that virtual interaction produces better results (emphasis added).

Schutte suggests that the students in the virtual class may have performed better because they compensated for the lack of face-to-face contact with the professor by forming study groups to "pick up the slack of not having a real classroom." (He offers no documentation of these study groups, so his evidence is apparently anecdotal.) Unstated, and seemingly not factored in, is that his study compares students working in groups with students not working in groups, regardless of whether the groups were self-created study groups or virtual discussion groups.

Weaknesses in the study. In fact, students in the virtual class experienced a completely different method of teaching from those in the traditional class. Not only did they have more opportunities to interact with each other and with the teacher, but (very significantly) they were intensively engaged with the course material over the entire week. To test the comparative effectiveness of virtual versus traditional instruction, Schutte would have had to provide similar small-group activities, discussion opportunities, and other assignments for the traditional class. His study simply demonstrates that particular teaching methods (e.g., cooperative learning and exercises that ensure more time on task with the material) yield improved performance, results that educational researchers established years ago. (See, for example, the research summaries in Johnson & Johnson, 1989; Menges, Weimer, & Associates, 1996; and Nilson, 1998.)

There are also a number of methodological weaknesses in this study. Although Schutte did administer a pre-test to measure the demographic similarity of the two groups, he did not test them for statistical knowledge or aptitude (only "statistical feelings" and "math feelings" on a scale from one to ten). Schutte relied on random assignment to ensure that pre-existing differences in statistical expertise would be equally distributed between the groups; however, given the small size of the class, that reliance was risky. If the virtual group happened to have greater expertise or aptitude, that head start alone could account for much of its 20-point advantage. Furthermore, Schutte provides only scant information about the course tests and does not describe the grading procedure at all. He does say that both the midterm and final exams had four parts: matching, objective, definitions, and problems. It would be important to know whether the tests were "blind graded," across students as well as across the two groups, especially since portions of the tests (the definitions and problems) require subjective judgment on the part of the grader. An effective research design would include pre-test and post-test measures and selection of significance levels prior to collecting the data.
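
To see how little protection random assignment provides at this scale, consider a minimal simulation (my own sketch, not part of Schutte's study; the aptitude scale, with a mean of 100 and a standard deviation of 15, is a hypothetical stand-in):

```python
# A sketch of why random assignment alone is risky with only 40 students:
# it estimates how often two randomly formed groups of 20 differ in baseline
# aptitude purely by chance. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
gaps = np.empty(n_trials)

for i in range(n_trials):
    aptitude = rng.normal(100, 15, 40)  # hypothetical pre-existing aptitude scores
    # Independent draws are exchangeable, so the first and last 20 form a random split.
    gaps[i] = abs(aptitude[:20].mean() - aptitude[20:].mean())

# With groups this small, chance alone routinely produces sizable gaps.
print(f"Mean chance gap between groups: {gaps.mean():.1f} points")
print(f"Share of trials with a gap of 5+ points: {(gaps >= 5).mean():.0%}")
```

Under these assumptions, roughly three random splits in ten leave one group five or more points ahead before instruction even begins, which is why a pre-test of statistical knowledge or aptitude matters so much at this sample size.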

Value of the study. The Schutte study violates so many of the rules of empirical research that we cannot use it to draw any conclusions about the comparative effectiveness of "traditional" versus "virtual" teaching. Can we learn anything useful about teaching with technology from his work? One could argue that, without technology, Schutte would not have experimented with other teaching methods. We have all heard anecdotes about professors who, in trying to incorporate technology into their courses, adopt better instructional approaches as a consequence. However, we do not know how many teachers benefit in this way, nor whether students in their classes actually learn more material or master more skills. This argument also fails to address the question of whether we need the technology at all if we can be more effective by simply adopting these teaching methods in the traditional classroom, which would certainly be cheaper.

Alternative Outcomes

Does Schutte's study demonstrate that cooperative learning is well suited to a virtual environment? The answer depends on one's definition of cooperative learning. At its simplest level, cooperative learning is "a structured, systematic instructional strategy in which small groups work together toward a common goal" (Cooper & Mueck, 1990, p. 68), but authorities on the subject insist that effective cooperative learning requires more than simply putting students into small groups and giving them a task to perform. For example, there must be "positive interdependence" among group members, which means that each member's success must depend to some degree on the group's success. "Individual accountability" is also necessary, which means that all members of a group are accountable for their own learning as well as for that of the other students in the group, and group members should share the workload equally. The group must also be aware of (and consciously work on) collaborative social skills such as conflict management and giving constructive feedback. The social aspects of cooperative learning groups, which are crucial to their effectiveness, are difficult (if not impossible) to replicate in a virtual environment. Schutte obviously did not set out to create cooperative learning groups by this definition, but his study does show that some aspects of student collaboration can be accomplished in virtual space. Even this conclusion is provisional, however, and it leaves open the question of how much more effective the groups might have been if they had been "real" instead of "virtual."

Finally, one might look at Schutte's project in terms of the effectiveness of "total instructional redesign," since his methods in the virtual class were so different from his methods in the traditional class. Only if we accept the results of his study as valid (which, for the reasons given above, we cannot do) could his "redesign" be called effective. However, we cannot ignore the enormous costs of the technology in this equation. If he had used these methods in his traditional class, costs would not have increased; but because he and his students needed the networked technology of a major educational institution, they incurred the extremely high costs that such technology entails. One could argue that traditional instruction requires buildings, desks, heat, light, and other resources that also cost enormous amounts. However, buildings do not need to be rebuilt every five years, desks do not become obsolete because someone has upgraded the design, and traditional classrooms do not require a platoon of highly paid experts to stay online.

Other Comparative Research

Schutte's study is flawed, but many of the other research studies in this area are also weak. Thomas Russell's survey of research on what he characterizes as "The 'No Significant Difference' Phenomenon" is often used to support the argument that it does not matter what delivery system is used, since there is no difference in how students perform. But the studies contained in Russell's list suffer from many shortcomings: the research designs are poorly conceived, the statistical analysis is weak or absent, and/or the sample size is too small. Many of the studies do not try to measure learning outcomes at all, but focus instead on attitudinal outcomes: how the students felt about the experience rather than what they learned. The studies that do try to assess student learning as an outcome variable often use tests that measure simple recall of information rather than mastery of higher-order learning. In these cases, it is not surprising that there is "no significant difference" in performance, because it really does not matter to students how they acquire factual information.
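
The sample-size point deserves emphasis. A rough power simulation (my own illustration, not drawn from Russell's compilation; the class sizes and the effect size are assumptions) shows why small studies are almost predestined to report "no significant difference," even when one method really is better:

```python
# A rough power simulation: with small class sizes, a two-sample t-test will
# usually miss even a real, moderate effect, so "no significant difference"
# is the expected verdict regardless of the methods' true merits.
# The group size (15) and true effect (0.5 standard deviations) are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_trials = 15, 5_000
true_effect = 0.5  # assumed real advantage, in standard-deviation units

significant = 0
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(control, treatment)  # two-sample t-test
    significant += p < 0.05

# Power here lands far below the conventional 0.80 target.
print(f"Chance of detecting the effect at n={n_per_group} per group: "
      f"{significant / n_trials:.0%}")
```

Under these assumptions, the test detects the real difference only about a quarter of the time, so a "no significant difference" finding from such a study says more about its statistical power than about the delivery system.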

Conclusion

Educational researchers face many difficulties in trying to conduct controlled studies in university settings, because threats to validity and reliability are often beyond the influence of the investigator. As a result, a number of people who are interested in the application of technology to teaching have abandoned traditional research models, opting instead for "proof in practice." The Flashlight Project of the American Association for Higher Education is an example of this approach (a description of the project is available on the AAHE web site). This strategy can work, and might ultimately identify successful applications of technology in teaching, as long as evaluations of the results are based on learning outcomes. Unfortunately, most of these projects repeat the same error that Schutte committed, confusing teaching processes with instructional outcomes.

For example, the Flashlight Project seeks to discover whether faculty and students find the available technology useful (or a hindrance) when they try to implement Chickering & Gamson's (1987) principles of good practice in undergraduate education:

  1. Interaction between the student and teacher (or tutor, or other expert);
  2. Student-student interaction;
  3. Active learning;
  4. Time on task;
  5. Rich, rapid feedback;
  6. High expectations of the student's ability to learn; and
  7. Respect for different talents and ways of learning.

Chickering and Gamson's justification for using this approach reveals an interesting, if flawed, line of reasoning: "Because so much research indicates that these practices support better learning, it would be significant to discover that they were being implemented and that technology was playing an important role." In short, if we find that these secondary indicators are present in technology-based instruction, we are to assume that learning occurred. Yet most teachers know that any teaching method can be performed well or poorly, so the presence of (for example) active learning techniques will tell us nothing about the quality of the learning outcomes. Nor will this project tell us what the students learned (facts, ability to apply knowledge, critical thinking skills) or whether they could have learned better or faster without the technology. The evaluation does not address cost-benefit issues with respect to learning outcomes, so the question of the comparative costs of delivering instruction with and without technology will go unanswered. These are the kinds of questions that are essential for evaluating any educational program, especially one that places increased demands on the human and financial resources of higher education.

In the 1960s, instructional television (ITV) promised to change teaching and learning dramatically. College administrators and state legislators, hoping to "expand educational opportunity" (and ultimately save money) by using this electronic delivery system, invested millions in closed-circuit systems, TV production facilities, educational television stations, and even airborne broadcasting systems. Proponents of ITV in the '60s predicted that "as much as 50 percent of the college degree program will be available for credit via television" in the future (Murphy & Gross, 1966, pp. 83-95). Instructional television failed to transform higher education, but the 1960s were boom years for education, and the grand experiment with ITV (wasteful as it was) occurred in a time of flush resources. Today, higher education operates in a much tighter fiscal environment, and we cannot afford to make many mistakes. We therefore need better studies, focused on significant questions, to guide us in developing appropriate and cost-effective applications of the new technology. To make educational policy decisions or base large investments on anything less is foolhardy.

References

Chickering, A. W., & Gamson, Z. F. (1987, March). Seven principles for good practice in undergraduate education. AAHE Bulletin.

Cooper, J., & Mueck, R. (1990). Student involvement in learning: Cooperative learning and college instruction. Journal on Excellence in College Teaching, 1, 68-76.

Johnson, D. W., & Johnson, R. T. (1989). Cooperation and competition: Theory and research. Edina, MN: Interaction.

Menges, R. J., Weimer, M., & Associates. (1996). Teaching on solid ground: Using scholarship to improve practice. San Francisco: Jossey-Bass.

Murphy, J., & Gross, R. (1966). Learning by television. New York: The Fund for the Advancement of Education.

Nilson, L. B. (1998). Teaching at its best: A research-based resource for college instructors. Bolton, MA: Anker.
