July/August 2003 // Commentary
Opening a Can of Worms: A Conversation about the Ethics of Online Student Evaluation of Teaching
by Coralie McCormack, Andrelyn Applebee, and Peter Donnan
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Coralie McCormack, Andrelyn Applebee, and Peter Donnan, "Opening a Can of Worms: A Conversation about the Ethics of Online Student Evaluation of Teaching," The Technology Source, July/August 2003. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

Judgments about the quality of teaching and decisions about promotion and funding are now based, in part, on student evaluations of specific instructors. In this context, Neumann (2000) predicts that institutions will come under "increased scrutiny on issues such as sound evaluation practices" (p. 121). Ethical practices are an integral part of a sound evaluation framework, but ethical evaluation works only when it is open to critical scrutiny. Although few academic conversations have examined evaluation procedures through an ethical lens, the significance of such scrutiny is underscored by Theall (2000) when he suggests that putting student rating systems online "may create massive problems in the areas of confidentiality and privacy" (¶ 9). Cummings and Ballantyne (2001) also voice concerns about confidentiality. A conversation on this topic is needed.

This article shares part of an online discussion among three archetypal colleagues—Gung Ho, Al Ternative, and Ima Moderator—who have critically examined their delivery of online courses to off-campus students. Online instruction is part of the everyday practice of many academics, and both individuals and institutions are expected to judge that instruction. The following excerpt raises two questions that warrant further discussion: Is student evaluation online different from evaluation in other contexts? What ethical and legal considerations are raised when evaluation goes online?

Gung Ho: Ima, I have been thinking about the question you raised in your last posting: "Is student evaluation online different from, or similar to, student evaluation in other contexts?" I think the purpose of evaluation is the same whatever the context. Evaluation is "about learning from our students, and their learning, and learning about our teaching" (Cannon, 2001, p. 86). For me, evaluation occurs as part of my everyday teaching practice. It is on-going throughout the semester. Every aspect of my teaching interactions with students serves as "input" into the evaluation process, regardless of whether it is formally collected or labeled "evaluation data."

Al Ternative: I agree with Cannon's statement of the purpose of evaluation. But with regard to your second point, Gung, I would like to offer an alternative opinion. While I agree that teachers are always monitoring interactions between themselves and their students and using this as informal intelligence to improve their teaching, I would not equate this process with evaluation. Evaluation for me is a formal, purposeful, and systematic inquiry. Its inputs are drawn from specific activities bounded by particular time limits.

Gung Ho: Al, I think that different views on the form and timing of evaluation input are particularly significant when we compare student evaluation of instruction in online and offline contexts. Adopting an "everything is data" approach in an online teaching context encourages us to consider a greater variety of sources, such as e-mail messages and bulletin board discussions, as inputs into the evaluation process. The online teacher has a written record of conversations that would not be available if these conversations occurred offline, during a tutorial or in the corridor after a lecture. In the online context, the amount of information—particularly qualitative information—also increases, as does the specificity of the data. The categorization of interactions as teacher-student or student-student, for example, means that more complex qualitative methods of analysis (such as discourse analysis) can be used.

Al Ternative: Online evaluation, especially with some course management systems, can provide extensive data. In fully online courses, the totality of interactions between teaching staff and students, and among students themselves on bulletin boards, can be archived. The data may include how often and exactly when students accessed the system, the time spent on particular files, and even transcripts of chat sessions. These features do not necessarily differentiate online evaluation from evaluation in other contexts, however. They simply represent efficient mechanics in data collection. It could be argued, for instance, that if on-campus lectures and tutorials were videotaped or tape-recorded, one might in fact obtain richer data because of the paralinguistic features available for analysis.

Gung Ho: Your comparison raises another interesting point. To record and transcribe tutorial conversations, teachers would need the approval of the Human Ethics Committee at their institution—and they would know to seek it. In an online learning environment, however, there are insufficient safeguards in terms of confidentiality, anonymity, and informed consent. To show how student grievances may arise in this area, let me present the following hypothetical e-mail from a concerned student:

I am a student in a fully online degree program. Last year I completed an online evaluation questionnaire for a course in this program. Coincidentally, my cousin studies the same course on-campus, where he attends lectures and tutorials. He also completed the online evaluation. Imagine our consternation when we recently saw a published article that contained some of our evaluation comments, as well as the verbatim text of several of our postings to classmates on the bulletin board. We were dismayed and confused about our rights in this situation. When we posted those comments, we had no idea that they would reappear later, out of context and in another location.

While I recognize that many students would not take the time to initiate such complaints, this example illustrates the prickly ethical issues that students legitimately could associate with a lack of transparency in the evaluation process.

Al Ternative: I share your concern. Online student evaluation has the potential to open a can of worms! If student comments may be reproduced, then teaching staff and students must have a clear understanding of what constitutes student evaluation and what safeguards exist for its use—especially if that use occurs in research and publication. As suggested by the example you share, privacy and copyright are important concerns. I suspect that anyone who posts a message owns the copyright to it. The following statement, which was adapted from the eModerators discussion group, could be used:

Copyright [NAME] 2003. Permission is hereby granted for the redistribution of this material over electronic networks, in departmental publications, or external publications by the teacher so long as this item is redistributed in full and with appropriate credit given to the author. All other rights reserved.

The adoption of this practice would be in accordance with a legal view that the student owns the copyright and has rights that require attribution when personal postings are reproduced. Alternative institutional arrangements could be established by agreement; this is likely to be the case since most institutions have neither the time nor the resources to enter into the logistics of recording signatures. Additional issues in this can of worms are how to extract representative contributions from bulletin board discussions and chat sessions, and whether to consult participating students in this process. I suspect that most institutions would prefer to ignore these complications or introduce a simple, easy-to-implement waiver; but would this adequately address the issues we have discussed? Would the waiver be mandatory in all cases, or only in some cases? What about chat conversations in which multiple students participated, but only some of those students signed a waiver for future reproduction of their comments?

Ima Moderator: Gung and Al, your thoughtful postings have caused me to reconsider my evaluation practice! However it is defined, evaluation is a means to improving teaching and not an end in itself. Whether the "means" are online or offline and the location is on-campus or off-campus, I agree that evaluation must be ethically sound in order to protect confidentiality and ensure informed consent. Evaluation must be "fair to all and seen to be fair." Yet our discussion shows that it may be difficult to determine where to draw the line between what should be an ethical practice and what should also be a legally protected practice in cases like this.

It seems that the can of worms our conversation has opened demands attention, not only by our teaching team but also by our colleagues across the institution, whatever their teaching context. A discussion of the issues we have raised needs to be initiated NOW. Maybe we can combine our ideas into a paper and present it at the next meeting of our Human Ethics Committee?

Coda: And that is what they did. The conversation on our campus continues. Our invitation to the readers of this article is to start the conversation in your area. Reflect on your evaluation practices and those of your institution. Raise the awareness of ethical issues in this context.


Cannon, R. (2001). Evaluating learning or evaluating teaching: Is there a difference and does it matter? In E. Santhanam (Ed.), Student feedback on teaching: Reflections and projections. Refereed proceedings of Teaching Evaluation Forum held August 28-29, 2000 (pp. 81-92). Perth: Organisational and Staff Development Services, The University of Western Australia.

Cummings, R., & Ballantyne, C. (2001). Online student feedback surveys: Encouraging staff and student use. In E. Santhanam (Ed.), Student feedback on teaching: Reflections and projections. Refereed proceedings of Teaching Evaluation Forum held August 28-29, 2000 (pp. 29-37). Perth: Organisational and Staff Development Services, The University of Western Australia.

Neumann, R. (2000). Communicating student evaluation of teaching results: Rating interpretation guides (RIGS). Assessment and Evaluation in Higher Education, 25(2), 121-134.

Theall, M. (2000, November/December). Electronic course evaluation is not necessarily the solution. The Technology Source. Retrieved January 18, 2002, from http://technologysource.org/?view=article&id=108
