October 1998 // Letters to the Editor
Illuminating Objectives: A Multifaceted Challenge
by Gary Brown
Note: This article was originally published in The Technology Source (http://ts.mivu.org/) as: Gary Brown, "Illuminating Objectives: A Multifaceted Challenge," The Technology Source, October 1998. Available online at http://ts.mivu.org/default.asp?show=article&id=1034. The article is reprinted here with permission of the publisher.

In response to Ed Neal's criticism (Neal, 1998), I think it is important to point out that the Flashlight Current Student Inventory can be, and has been, constructed to support traditional, controlled empirical research—Neal's preferred approach and the basis of his criticism of Flashlight. The perennial quantitative/qualitative debate, after all, has been and continues to be conducted quite thoroughly elsewhere. Still, it is important to point out that even the purely rational model Neal upholds assumes a practice of assessment that includes (or should include) the comprehensive assessment of all aspects of the educational strategy under discussion. As the fourth principle of the American Association for Higher Education's Assessment Forum states: "Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes" (Astin et al., 1996). It is, incidentally, precisely such experiences that an evaluator using Flashlight can articulate and illuminate extremely well.

I am interested in continuing this discussion of learning outcomes, whether in defense of the Flashlight Project or simply in general, and will therefore set aside several additional aspects of Neal's argument that merit scrutiny (such as his contention that the learning benefits of smaller classes have been clearly established—as much as we all would like to believe such an assertion, it just isn't so). Neal confuses assignments, or learning tasks, with operationalized objectives. Drawing examples from a course on "Women and Gender in Latin American History" (Neal, 1998), he offers the following "operationalized" objectives:

  • Explain the differences between feminist and socialist analyses of women's subordination.
  • Identify the reasons why women participated or chose not to participate in revolutions.
  • Construct an argument: Does revolutionary change improve conditions for women?
  • Identify the themes women addressed in their artistic and literary works, and the reaction of society to such women.
  • Critically evaluate the differences between norms and behavior. Can women depart from the norm without directly challenging it?

These objectives may well be used to encourage learning. What Neal does not provide, however, is an assessment methodology an instructor or evaluator might use to make valid and reliable assessments of the work students will generate in response to these assignments.

Specifically, how are we to assess the quality of students' explanations and analyses of "women's subordination"? How will a team of evaluators, let alone a single instructor, assess the quality of the learning depicted in students' identification of women's participation in revolutions? How does one assess the quality of a student's "argument"? How do we evaluate, with reliability and validity, students' discussions of the "differences between norms and behavior"?

Remember, too, that we have to make such assessments without grading students, since Neal himself dismisses the validity of grades by agreeing with Milton that "To believe that grades have any measurement validity flies in the face of logic and experience" (Milton et al., 1986 in Neal, 1998).

Neal's objectives are solid, but they will not yield the measurable outcomes Neal desires unless we also do the considerably more difficult work of classifying and defining our objectives in behavioral terms, developing or selecting measurement techniques, establishing the reliability and validity of those techniques, collecting the data, and then comparing student performance with our objectives. And if we really want our empirical assessment to be meaningful, we need pre-tests on the same measures to increase the probability that it is our instructional strategies that are making the difference.

Finally, even should we successfully accomplish such a comprehensive assessment, we cannot be certain that what we learn from that process will transfer from one context to another. In other words, Neal's depiction of learning outcomes, even in traditional objectives-oriented terms, takes us no nearer to understanding how we might evaluate students' responses to the assignments, with or without technology.

Ed Neal's response to Ehrmann's letter and mine does, however, raise a critical issue relative to the assessment and purchase of new technologies. No technology will clarify the ubiquitous misperception of learning outcomes. The tendency to oversimplify the learning process by reducing it to a single, linear measure is perpetuated not only by the press, which publishes standardized test scores without consideration of the complex teaching and learning processes that yield them, but by many within our professional sphere as well. We must investigate similar misperceptions within academia if we are to implement an outcomes-based approach to learning effectively.


Astin, A. W., Banta, T. W., Cross, K. P., El-Khawas, E., Ewell, E., Hutchings, P., Marchese, T. K., McClenney, K. M., Mentkowski, M., Miller, M., Moran, T. E., & Wright, B. (1996, July). 9 principles of good practice for assessing student learning. AAHE Assessment Forum [Online]. Retrieved September 28, 1998, from http://www.aahe.org/assessment/principl.htm

Neal, E. (1998, September). Finding Flashlight in the dark: A reply to Steve Ehrmann and Gary Brown. The Technology Source [Online]. Retrieved September 28, 1998, from http://technologysource.org/?view=article&id=44#neal
