July 1998 // Letters to the Editor
Flashlight Illuminates Assessments
by Gary Brown
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Gary Brown, "Flashlight Illuminates Assessments," The Technology Source, July 1998. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

I appreciated Ed Neal's piece on assessing technology, including his critique of the Schutte study and his discussion of the difficulties involved in a comparison model of research in educational settings. Still, I am compelled to quibble with some of his assertions about assessing learning outcomes, particularly those about the relationship of the Flashlight Project to learning outcomes.

In point of fact, the Flashlight Project does have a cost analysis model, which addresses Neal's concern that "the evaluation does not address cost-benefit issues with respect to learning outcomes."

More importantly, when Neal argues that the Flashlight Project, by which I suspect he means the Flashlight Current Student Inventory (CSI), does not provide a direct assessment of learning outcomes, he is correct, except that the Flashlight CSI does not intend to provide a direct assessment of learning outcomes. Those who have been introduced to the Flashlight CSI should understand that, in order to assess student learning, it is wise to use every tool and every source of information available. Flashlight happens to be a very good tool that provides very good information, and using it alongside other measures can make a good assessment even better.

I also question the purported superiority of the learning outcomes implied in Neal's commentary, where he cites "facts," the "ability to apply knowledge," and "critical thinking skills" as "what students [should have] learned." After all, how do we measure "critical thinking skills" and "application of knowledge"? As for "facts," perhaps the less said the better, but I wonder whether the learning of "facts" in isolation really provides a better measure of learning than the assessment of the learning experience that the Flashlight CSI can provide. We all talk about the value of learning outcomes, but when it comes time to spell them out in ways that can actually be assessed, the conversation seems to trail off into platitudes and a variety of other nice abstractions. In practice, the assessment of learning outcomes quickly narrows, much like the old learning objectives movement of the 1930s and 1940s, into collections of quickly forgotten trivia.

Meanwhile, our assessments at Washington State University (WSU) have found a correlation between positive Flashlight findings about student learning experiences and improved grades. Those positive findings, incidentally, reflect the kinds of experiences I suspect Chickering and Gamson would encourage: active learning, more time on task, more time talking about course content with faculty, peers, and experts, and a variety of other findings that hard-core outcomes addicts might call soft. My personal sense of that correlation, however, is that the relationship between grades and substantive learning experiences suggests grades may have a bit more validity than I initially would have suspected.

My most important criticism of Neal's critique of Flashlight is that he represents it as a tool that assesses only the "useful" aspects of technology. In fact, the Flashlight CSI, when used correctly, helps assess students' experiences in many ways, including their interaction with faculty, other students, ideas, visual material, and course concepts. We have even used Flashlight at WSU to assess peer facilitation in ways that are independent of technology. The comparison made possible by this approach, between students' experiences with peer facilitation and their experiences with several different technologies, was quite illuminating. It is also vital to note the important distinction between an assessment of student learning experiences and an assessment of student opinions or evaluations, though that is another can of worms I won't open here.

Finally, Neal's suggestion that Flashlight does not help determine whether students "could have learned better or faster without the technology" is somewhat inconsistent with his criticism of Schutte's study, in which he points out that students "in the virtual class experienced a completely different method of teaching from those in the traditional class." Since this is usually the case when we compare technology-based classes with those that do not use technology, it is not particularly useful to insist that an assessment compare between groups, especially given the difficulties Neal points out earlier in his essay: two groups usually have more differences than similarities. Because Flashlight is very good at helping us examine the different experiences the same students have in the same classes when using different technologies, it is in many ways more powerful than the pure research model most of us hold in our heads. I don't think the challenge, for those of us in faculty development, is to compare virtual instruction with traditional instruction in order to determine which works better. Instead, we need to understand how different approaches to instruction influence the learning process. When we do that, we get a better handle on how we might alter our approaches to improve practice.
