September 1998 // Letters to the Editor
Does Using Technology in Instruction Enhance Learning?
A Reply to Ed Neal
by Jerald Schutte
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Jerald Schutte, "Does Using Technology in Instruction Enhance Learning? A Reply to Ed Neal," The Technology Source, September 1998. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

In the June 1998 issue of The Technology Source, Ed Neal, director of faculty and staff development at the University of North Carolina, reviewed my study concerning the effects of virtual versus classroom instruction (Schutte, 1997). Neal concluded that "... Schutte's research design and methodology are so flawed that the results of the study are uninterpretable. His fundamental error arises from confusion between teaching methods and delivery system." I believe Neal has missed the point of my study, and I therefore respond to his major criticisms below, organized according to his outline.

The Study

Neal has incorrectly characterized the premise of my experiment, which was to test the effects of different learning venues, not different styles of teaching. In arguing that students in the virtual classroom experienced life as a collaborative group, whereas students in the real-time classroom condition did not, he has simply misconstrued the definition of a group. Indeed, if any condition would qualify under the sociological definition of a group, it would be the 17 students in a real-time classroom engaged in regular face-to-face interaction with common activities and common goals, not 17 geographically dispersed individuals who engaged in dialogue and collaboration only through a computer.

Neal has incorrectly identified my study as an experimental test of virtual group collaboration; in essence, he mistakes the dependent variable for an independent variable. The virtual condition was not, itself, the group dynamic; rather, the virtual condition produced a particular group dynamic that led to better test scores. Indeed, the virtual students were initially far more "isolated" than the classroom students: they did not know their fellow students, could not speak with them or with the professor, and had to cope with a new computer system that crashed 14 times during the first four weeks. Only later in the semester did students in the virtual classroom begin to interact.

If Neal were correct in his assumption that it is the group dynamic rather than the technology that accounts for the testing differences, then the virtual students' performance on the midterm exam (week 7) should have been far weaker than their performance on the final exam (week 14), and the midterm might even have yielded findings favoring the classroom-based class, since no "groups" had formed in the virtual condition by the time of the midterm. Such was not the case: the midterm results were as pronounced and significant as the final results.

Weaknesses in the Study

Neal also argues that in my study "the virtual class experienced a completely different method of teaching" and that "there are ... a number of methodological weaknesses." He supports the first statement with two points: (1) the virtual students had more opportunity to be involved with each other and with the teacher; and (2) they were intensively engaged with the course material over the entire week.

He is incorrect on the first point. Students who spend five hours per week in face-to-face conversation with their professor and peers (the classroom) have more opportunity to engage each other than students who "meet" only online. If Neal does not subscribe to this logic, then he must hold that the computer is superior to face-to-face interaction as a communication tool. He does not; in fact, later in his review, he claims the reverse.

With respect to the second point, all students in both conditions were required to read the same textbook material and work the same homework problems, were given the same time frame to submit their work, and took the same tests at the same time and in the same location. The IRC chat discussion for the virtual students and the question-and-answer period for the classroom condition were identical in duration. The e-mail assignments for the virtual class were identical to the lab sessions for the classroom students (i.e., students in both groups were assigned the same collaborative tasks). The HyperNews postings for virtual students were identical to the stimulus questions posed in real time to the in-class group. The Web lectures were simply written versions of my classroom lectures. Beyond these activities, no group assignments were given or encouraged in either condition. Whatever informal groups developed under these conditions were strictly a function of the students' reaction to the condition, not the condition itself, a fact that most critics confuse.

The second weakness Neal notes focuses on my methodology. He argues: (1) no pre-test was conducted as to "statistical knowledge and aptitude" of the students; (2) random assignment was instead utilized to compensate for this deficiency; (3) the testing instruments were not adequately described, and in particular I did not mention blind reading of the exams; and (4) the study should have included pre- and post-tests with significance levels chosen prior to collecting the data.

With respect to point one, I reported the overall GPA of students (a common measure of accumulated knowledge) in both groups and found no significant difference. I also tested the difference in mean GPA for the course prerequisite, a lower-division statistics course; there was no significant difference there, either. Because this finding mirrored the overall GPA (the two correlate at better than 0.80), I reported only the overall GPA comparison rather than both.
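For readers who want to see the shape of this check, a minimal sketch in Python follows. The GPA values and the noise added to the prerequisite scores are invented for illustration; only the form of the analysis (a two-sample t-test on mean GPA and a correlation between overall and prerequisite GPA, reported as exact probabilities) reflects what the study describes.

```python
# Illustrative sketch only: invented GPA values standing in for the
# study's two 17-student conditions.
import random
from scipy.stats import ttest_ind, pearsonr

random.seed(0)
virtual_gpa = [round(random.uniform(2.5, 3.7), 2) for _ in range(17)]
classroom_gpa = [round(random.uniform(2.5, 3.7), 2) for _ in range(17)]

# Two-sample t-test on mean overall GPA; the study found no significant
# difference. Reporting the exact p-value (rather than "p < .05") is the
# practice defended later in this reply.
t_stat, p_value = ttest_ind(virtual_gpa, classroom_gpa)
print(f"overall GPA: t = {t_stat:.2f}, p = {p_value:.3f}")

# Prerequisite-course GPA, here faked as overall GPA plus noise, to show
# the kind of 0.80+ correlation the study reports between the two measures.
all_gpa = virtual_gpa + classroom_gpa
prereq_gpa = [g + random.uniform(-0.25, 0.25) for g in all_gpa]
r, _ = pearsonr(all_gpa, prereq_gpa)
print(f"overall vs. prerequisite GPA: r = {r:.2f}")
```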

With respect to point two, even without a two-tiered pre-test of prior statistical knowledge and aptitude, random assignment of subjects to conditions suffices to control for such exogenous variables. This is the premise behind small-sample statistics.
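The logic of that premise can be made concrete with a small simulation. The aptitude scores below are invented; the point is only that, across repeated random splits of the same pool of students, the two conditions end up with essentially identical mean aptitude, so prior ability cannot systematically favor either venue.

```python
# Illustrative simulation only: invented aptitude scores for a pool of 34
# students (17 per condition, matching the study's class sizes).
import random

random.seed(42)
aptitude = [random.gauss(100, 15) for _ in range(34)]

gaps = []
for _ in range(10_000):
    pool = aptitude[:]
    random.shuffle(pool)
    virtual, classroom = pool[:17], pool[17:]
    gaps.append(sum(virtual) / 17 - sum(classroom) / 17)

# Across repeated randomizations the expected gap in mean aptitude is zero,
# which is why random assignment controls for exogenous variables even
# without a pre-test.
print(f"mean gap over 10,000 randomizations = {sum(gaps) / len(gaps):.3f}")
```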

With respect to point three, Neal treats sins of omission as sins of commission. That I chose not to patronize my readers with obvious statements does not mean I failed to follow proper procedure. All student exams in both groups received blind readings. The significant differences held across several question types (multiple choice, definitions, problem solving). Since objective questions, by definition, raise no inter-rater reliability issues, the fact that they yielded the same results as the problem-solving section further supports the blind-reading assertion.

With respect to point four, reporting the actual probability levels does not preclude choosing a significance level. However, since this was a preliminary study, I chose to report the actual probabilities so that readers could see the trends in the data rather than the arbitrary "significant/not significant" dichotomy researchers often adopt when deciding the worthiness of their initial efforts.

Value of the Study

Neal asks whether we can learn anything from my study and argues that, even though it might motivate experimentation with other teaching methods, it fails to address whether the classroom can support innovation just as well as virtual learning while remaining the cheaper process. If negating significant results for a research hypothesis required only that plausible rival hypotheses exist, no one would ever initiate research, since any significant finding admits other plausible explanations. Yet pioneering studies are continually initiated. That is the cumulative nature of research.

Moreover, even granting that the classroom could also be a place for innovation, it is certainly not cheaper than its virtual equivalent. The fallacy in Neal's argument lies in his selective choice of which costs to compare: he argues that the useful life of buildings and desks far exceeds the useful life of virtual technology, and hence that classroom teaching costs less.

Set aside his assumption that campuses are already built (California State University has opened two new campuses in the past three years), and set aside the fact that some schools face costs well beyond simple maintenance and upkeep (FEMA, for example, has had to provide CSU-Northridge with over $350 million for repairs to its physical plant after the 1994 Northridge earthquake). Even then, most campuses spend far more on library personnel, plant and equipment maintenance, acquisitions of material, and general and administrative expenses than on all the computer personnel, equipment, administration, and upgrades on campus combined. At CSUN, for example, the former figure is almost three times the latter.

Not only does Neal's comparison exclude the direct costs of construction (whatever depreciation schedule is used), it also excludes the indirect student and faculty costs of commuting to campus, paying for child care, and absorbing parking fees, as well as the staffing costs of redundant evening and weekend course offerings to accommodate a working student population. Perhaps UNC is a more traditional university, but the demographics of American higher education clearly point to an increasingly older, employed full-time, commuting student who does not complete an undergraduate degree program in four years. Many students do not have the luxury of asking, "Can I learn better in a campus class or a virtual class?" but rather must ask, "Can I attend the university at all if attendance must be on campus?" This has nothing to do with measuring learning outcomes. It is purely and simply logistics, a component rarely identified in comparisons of virtual versus classroom-based teaching.

Alternative Outcomes

Neal admits that my study may "demonstrate that virtual learning is well-suited for cooperative learning and that some aspects of student collaboration can be accomplished in virtual space." But that is moot, he argues, because it "begs the question of how much more effective the groups might have been if they had been 'real' instead of 'virtual'."

The chief residual finding of my study is that students placed in a virtual environment, performing the same work as classroom-based students, are more likely to engage others for help and understanding (not to be confused with collaboration or group tasking, as Neal's review does) than are students placed in a face-to-face environment where constant contact is the norm. That this occurred is prima facie evidence that, all other things being equal, students in virtual groups interact more than students in in-class groups.

This finding also provided the premise for further experimentation I completed in the spring of 1997, in which I tested the hypothesis that students in virtual groups collaborate more than students in in-class groups. In four separate sections of a research methods class, students were exposed to: (1) classroom instruction with no collaboration, (2) virtual instruction with no collaboration, (3) classroom instruction with collaboration, and (4) virtual instruction with collaboration. In both comparisons, virtual instruction yielded higher test scores than the corresponding classroom instruction. The highest and most significant difference from all other conditions, however, was produced by the interaction of virtual instruction and collaboration. It would appear that computer-mediated communication accelerates peer-to-peer contact and does so significantly better than the classroom.
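This design is a 2 x 2 factorial (venue by collaboration), and the reported result is an interaction effect. The sketch below shows how such an interaction is conventionally tested with a two-way ANOVA; the scores are simulated, the cell means and 17-per-cell sizes are assumptions made only for this example, and statsmodels is merely one convenient tool for the analysis.

```python
# Illustrative sketch only: simulated scores for the 2 x 2 factorial design
# (venue x collaboration) described above. Cell means and the 17-per-cell
# size are assumptions made for the example, not the study's actual data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 17  # assumed students per cell

cell_means = {
    ("classroom", "none"): 72,
    ("virtual", "none"): 78,
    ("classroom", "collab"): 76,
    ("virtual", "collab"): 88,  # interaction: virtual plus collaboration highest
}

rows = []
for (venue, collab), mean in cell_means.items():
    for score in rng.normal(mean, 8, n):
        rows.append({"venue": venue, "collab": collab, "score": score})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of venue and collaboration, plus their
# interaction. A significant C(venue):C(collab) row corresponds to the
# interaction effect claimed above.
model = smf.ols("score ~ C(venue) * C(collab)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```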

Other Comparative Research

Neal argues, on the one hand, that my study is about methods of teaching, not alternative delivery; he then compares it with the Russell (1997) survey of studies that focus on alternative delivery. His implicit conclusion is that since Russell's survey finds largely no significant differences across studies, mine must be non-significant as well. This "guilt by association" logic is precisely why research in this field is so ambiguous. Russell's review of the effectiveness of alternative delivery is limited exclusively to studies of correspondence and television venues; furthermore, those studies are all qualitative or anecdotal. These are the very venues and procedures that motivated my experimental treatment. Comparing my study to his survey is highly inappropriate.

Moreover, Neal's assertion that most of Russell's studies assess only simple recall, and therefore suggest no significant difference between virtual and classroom outcomes, is simply irrelevant to my study, which demonstrates that the differences in my classes hold across more than four different types of test questions.

My Conclusions

Neal's critique of my study fails on three counts. First, he fails to establish the premise from which his argument flows: this study tested different delivery systems, not different methods of teaching, and the virtual condition was not a group condition; each treatment involved identical work in identical formats. Second, his charge that my methodology was flawed is unfounded: my study used a control-group design with random assignment, which enabled me to evaluate both learning and affective outcomes. Third, he fails to present evidence refuting the fundamental point of my study: that students receiving virtual delivery of content performed better than those who did not.

References

Neal, E. (1998, June). Does using technology in instruction enhance learning? Or, the artless state of comparative research. The Technology Source. Retrieved July 26, 1998, from the World Wide Web: http://technologysource.org/?view=article&id=86.

Russell, T. L. (1997). NB TeleEducation makes available the "no significant difference" phenomenon. Retrieved August 2, 1998, from the World Wide Web: http://tenb.nbcc.nb.ca/phenom/.

Schutte, J. (1997). Virtual teaching in higher education: The new intellectual superhighway or just another traffic jam? Retrieved July 26, 1998, from the World Wide Web: http://www.csun.edu/sociology/virexp.htm.
