As more college and university courses are offered via asynchronous learning networks (ALNs), such institutions face an important question: How can classroom assessment techniques be implemented for distance students, especially students communicating asynchronously?
Cross and Steadman (1996) define classroom assessment as "small-scale assessments conducted continually in college classrooms by discipline-based teachers to determine what students are learning in that class" (p. 8). Classroom assessment provides in-process feedback and allows instructors to implement continuous quality improvement techniques in their classes (Soetaert, 1998).
Research indicates that classroom assessment techniques (CATs) are highly flexible tools that can be used to achieve many assessment goals:
- Cross and Steadman (1996) list more than 40 types of CATs in their book on classroom research.
- Angelo and Cross (1993) provide CATs designed to assess specific goals as determined by their Teaching Goals Inventory.
- CATs allow feedback to be focused on specific processes, such as Chickering and Gamson's well-known "seven principles" (1987) as articulated by Graham, Cagiltay, Lim, Craner, and Duffy (2001).
- Bonwell (1999) emphasized that effective CATs can be used to implement critical thinking via active learning.
- CATs can be designed and administered to improve students' metacognition skills (Steadman & Svinicki, 1998).
In this article I first describe my attempt to adapt a specific type of classroom assessment technique to a distance learning course at Washington State University (WSU) and then provide a brief overview of CATs with possible issues to consider for future adaptations to online learning.
A Case Study in Adapting CATs to a Distance Course
In the fall semester of 1999, I taught a junior-level introduction to production management via the WSU Distance Degree Program. Twelve distance students were enrolled in this course, which was offered online and used a threaded discussion list called the Speakeasy Studio and Cafe.
The course was designed to promote student-instructor and student-student dialogue. With this in mind, I based a significant portion of the final grade on weekly postings to the threaded discussion list as well as required responses to peers' postings. The original course design emphasized teamwork on all assignments, weekly graded homework, weekly answers to discussion questions, at least three comments on peers' submissions, and three group projects.
During the third week of class, I encountered a significant problem: the threaded discussions were not going well. Answers to discussion questions generated few comments, and discussion threads were not very long (i.e., they did not generate many responses). In short, there was no extended dialogue among students on a particular problem.
I decided to try a classroom assessment technique similar to the minute paper described by Angelo and Cross (1993), adapted for the online learning environment as explained in Exhibit 1. CATs have been administered online for several years, for example at Eastern New Mexico University (ENMU, 2001), but this was my first attempt. At WSU, I had the advantage of access to an online survey tool for administering CATs. The program, known at WSU as CTLSilhouette, hosts "Flashlight Online," the online survey tool of the Flashlight Program of the Teaching, Learning, and Technology Group. Online surveys are flexible and useful tools for formative assessments such as CATs, especially for distance students studying via an ALN: like ALNs themselves, they are asynchronous, can be authored or taken from any computer with an Internet connection, can be anonymous, and can be adapted to proven formative assessment techniques such as CATs. Students were asked to complete the CAT (a short online survey) by a specific date.
The CAT that I used consisted of two questions:
- What is the one thing that helped you learn the most in this week's activities?
- What is the one thing in this course that is least helpful to your learning?
I had received negative feedback on the group work that I had required for the threaded discussion list. I had assumed that students were concerned about either (a) the amount of time spent coordinating group work or (b) the fact that students in other groups might see their work and steal their ideas (the Speakeasy Studio and Cafe did not allow students to create threaded discussions that only members of their group could see). The short survey revealed that my assumptions were wrong: The students' biggest concern was that other class members could see their work on the threaded discussion list before they were ready for it to be seen.
I decided to change the format of the course based on the results of the CAT. Instead of requiring that all work be done in groups, only three projects would require group work; the remainder of the coursework could be completed independently. Students were still required to post answers to weekly questions and comment on three peer postings weekly on the threaded discussion list, but they were not required to do this as a group.
Three days after the CAT was given, I posted a summary of the students' responses to the online learning environment, along with explanations of changes being made as a result of student input, including changes to the group work requirement. I also explained which student suggestions could not be implemented. This worked well. In the first two weeks of the course, before the CAT was given, students made an average of 16 postings to the threaded discussion list in response to the discussion question. The week after the changes were made, the number of postings jumped to more than 70. Many of the discussion threads were also "deeper" (i.e., an original posting generated several responses, others responded to these responses, and so on), indicating that discussions were becoming more substantive. Increases in discussion volume and commentary may be attributable to the Hawthorne Effect (see Exhibit 2) or to the fact that students became more accustomed to the technology and perhaps more interested in the new subject material. At the very least, the CAT helped me recognize my misperceptions and moved the focus of the course from the instructor (teaching-centered) to the students (learning-centered). This is one of the most important effects of classroom assessment techniques (Angelo & Cross, 1993).
Issues to Consider in Adapting CATs to Asynchronous Learning Networks
CATs in asynchronous learning networks can differ notably from face-to-face, in-class CATs; these differences may affect student responses and should be considered when using them. The following points are particularly relevant:
Students in ALNs may be at different stages in a course. Most face-to-face CATs are given during a specific class period. All students have participated in the same class activities, and CATs usually focus on those activities. Students in ALNs, however, may be at various stages: Some may have finished the same unit that others have just started. If an instructor wants feedback on a specific topic, the CAT should be worded accordingly.
Students in ALNs do not experience the same learning environment. Students taking a CAT in a face-to-face course are all in the same physical environment. The instructor does not know what kind of environment ALN students are experiencing when they complete a CAT. Students may be on the road, trying to connect via a hotel telephone; in a quiet office; or at home, trying to deal with a busy household.
Generating anonymous responses in ALNs may be difficult. Examples abound of distance education instructors adapting assessment techniques similar to CATs. Some traditional correspondence courses send students pre-addressed and stamped envelopes and encourage them to mail in their feedback whenever they want. Many online courses solicit student feedback via e-mail. In both of the above cases, instructors can determine the name of the student sending the comments, and the students know this when they make their responses. To avoid this, some instructors have the mail or e-mail sent to third parties, who remove identification from the correspondence and then forward it to the instructor. The WSU case study had the advantage of access to an online survey tool that could keep responses anonymous.
At the same time, it remains important to consider some of the key success factors of traditional, face-to-face CATs when applying them in an online environment. Like face-to-face CATs, those in ALNs need to be well planned, ask pertinent questions, and return results to students quickly. With this in mind, instructors should also consider the nine-step "project cycle" (Exhibit 3) for effective CATs mapped out by Angelo and Cross (1993, p. 34). The steps of the cycle are divided into three phases: planning, implementing, and responding. In another article, Angelo concludes, "After fifteen years of working with faculty, we've learned that it is wise to start small, to limit risk-taking and time invested initially, and to share ideas and outcomes with colleagues" (Angelo, 2000, p. 2).
As an adaptable tool for online course assessment, CATs are too effective not to be used. One of our goals at the Center for Teaching, Learning, and Technology at Washington State University is to make the use of CATs in online classes easier by limiting the time investment and risk and to encourage instructors to use them. We hope to foster a learning community in which instructors share their ideas and continuously improve their classes by applying classroom assessment techniques.
References
Angelo, T. (2000). Classroom assessment: Guidelines for success. Teaching Excellence: Toward the Best in the Academy, 12(4), 1-2. North Miami Beach, FL: The Professional and Organizational Development Network in Higher Education Essays on Teaching Excellence series.
Angelo, T., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
Bonwell, C. (1999, September 9). Active learning in large classes. Seminar presented at Washington State University, Pullman, WA.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education [Special section pamphlet]. The Wingspread Journal, 9(2), 1-11.
Cross, K. P., & Steadman, M. (1996). Classroom research: Implementing the scholarship of teaching. San Francisco: Jossey-Bass.
Eastern New Mexico University. (2001). CYBER CATS: Classroom assessment techniques administered and reported via the internet. Retrieved April 1, 2001, from http://www.enmu.edu/~smithl/Assess/classtech/cat.htm
Graham, C., Cagiltay, K., Lim, B., Craner, J., & Duffy, T. M. (2001, March). Seven principles of effective teaching: A practical lens for evaluating online courses. The Technology Source. Retrieved July 9, 2001, from http://technologysource.org/?view=article&id=274
Soetaert, E. (1998). Quality in the classroom: Classroom assessment techniques as TQM. In T. Angelo (Ed.), Classroom assessment and research: Uses, approaches, and research findings (pp. 47-55). New Directions for Teaching and Learning, no. 75. San Francisco: Jossey-Bass.
Steadman, M., & Svinicki, M. (1998). A student's gateway to better learning. In T. Angelo (Ed.), Classroom assessment and research: Uses, approaches, and research findings (pp. 13-20). New Directions for Teaching and Learning, no. 75. San Francisco: Jossey-Bass.