Assessment does not sit easily at the table of higher education priorities. In responding to critical pings on the radar during an accreditation process, campus officials will often scramble for evaluation data. More often than not, however, their attention wanes when the process is over. Assessment comes in like a distant relative who overstays his welcome and disappears until the next visit.
How do we fail to sustain a process that creates meaning and improves learning? Differences in understanding, in language, and in institutional commitment to learning outcomes prevent most of us from dedicating ourselves to the hard, meaningful work of assessment. In this article, I outline the various challenges that arose during a particular assessment initiative at my home campus, Arizona State University (ASU) West. I also make recommendations to those who want to establish broad-based assessment programs at other institutions.
The NLII Transformative Assessment Workshop
In autumn 2002, ASU West joined an 8-week virtual workshop sponsored by the National Learning Infrastructure Initiative (NLII) and open to institutions that had participated in the NLII 1-day focus session on strategic alignment and assessment. The purpose of the workshop was to explore the potential for transformative assessment in the institutions that participated.
"Transformative assessment" was explained as a process of evaluation, change, and improvement that happens when a campus is in alignment across its different departments or programs; it is a systemic approach to assessment that is meaningful and responsive to an institution's mission and goals. Participating institutions were asked to put together a team that would explore and document the transformative assessment practices discussed during the workshop. Virtual workshop attendees would connect to the NLII site weekly, do their homework, participate in case studies, debate, reflect, and expose their weaknesses and worries to colleagues all over the country.
ASU West agreed to join, compelled by the participation of several "transformative" visionaries who would join us in cyberspace: Vicki Suter and Carole Barone of the NLII group, Steve Ehrmann and Robin Zuniga of the Teaching, Learning, and Technology (TLT) Group's Flashlight Program, Joan Lippincott of the Coalition for Networked Information, Gary Brown of Washington State University, and Chuck Dzubian of the University of Central Florida. These assessment experts would volunteer valuable time to review and critique our case studies and proposals.
Armed with assignments, our team began. Although an invitation to our weekly meetings was sent out to all faculty members who might be interested in assessment, only the arduously recruited arrived for free donuts. The standing committee consisted of the head of our academic senate, the director of Institutional Planning and Research and his campus assessment expert, the associate vice provost responsible for assessment, and the information technology (IT) director responsible for teaching and learning initiatives. Endorsements came in from the provost's office and our campus outcomes assessment team. We began enthused and optimistic, driven by a noble NLII experiment in mining knowledge throughout the national community.
Ten Barriers to Successful Assessment Initiatives
How could we fail? Let us count the ways.
1. Faculty perceptions. Faculty members do not endorse "assessment." They do not endorse it because they do not like the word. For them, the word "assessment" is from a language based in administration, accountability, and bureaucracy (although if you ask faculty members to discuss "how we know that our students know what we want them to know," most of them will eloquently express individual dedication to meaningful results). Moreover, most faculty members are not interested in broader initiatives that seek alignment in assessment standards. They tend to perceive everything outside the classroom as either instructional support or an unwanted intrusion, with assessment falling in the latter category. Faced with such attitudes at ASU West, our committee tried to cultivate a new, more systemic understanding of assessment and its benefits.
2. Lack of institutional commitment. Accreditation is the assessment driver, and our review was still 3 years away. Within this time frame, our administration had other priorities. Administrative leaders remained supportive, but uninvolved and unwilling to commit; they expressed interest in our project proposal, but they hoped that this project would not need funding and were not available to help implement it.
3. Short memories. Academic institutions have very short memories. Three years had passed since ASU West had agreed to establish seven campus-wide learning outcomes. Arriving at a consensus on the outcomes had been an arduous process, yet a faculty survey provided clear evidence that most of us could recall no more than three of the seven outcomes, and no one could recall efforts to assess the outcomes in any meaningful way.
Memories of failure are just as fleeting when new campus initiatives are considered. As Ehrmann (2001, "Closing Thoughts," ¶ 2) reminds us, "Ready, fire, aim!" thinking has led to 30 years of failure to learn from the past, leaving no collective memory and making individuals either gung-ho or gun-shy.
4. Inadequate time investment for serious effort. Educational institutions measure time in semesters and believe that renewal is inherent in a new academic year. Projects have a greater chance for implementation if they are proposed within the yearly cycle of renewal. By the time we finished our project, the cycle had ended at our institution. Members of the campus outcomes assessment team, guardians of the assessment gate, had left campus for the summer. Teams dissolve. Impetus ends. The gate is locked all summer.
5. Limited buy-in across the silos. Last-minute feedback was disappointing. Implementation plans were ignored by academic program chairs who did not see collaboration or change as serving their best interests. They weighed in late, often to contradict or ignore input gathered by their own representatives; they claimed not to remember the project; they did not take time to read the proposals, disapproved of our experts, preferred their own plans, or remained neutral.
Assessment includes change, and it may be blocked by the constituency most committed to the smooth operation of the status quo. Transformational change must take into account the fear, resistance, and "what's in it for me?" concerns of middle management.
6. Disagreement about definitions. Our committee included a wonderful group of determined people and dedicated stakeholders. However, we were diverse individuals separated by disciplinary jargon that can often signify anything and mean nothing.
This was especially the case in our different responses to the NLII READY tool, which we consulted in order to explore our ability to align action with strategic goals. The areas that we examined included policy, project selection, and assessment practices. Our various stakeholders could not agree on the appropriate criteria in any of these areas. We could identify campus goals, but when asked to imagine metrics for achieving them, alignment became a distant dream. We were not aligned on the core meaning of words or the intent of outcomes.
For example, the concept of learner-centered practices raised serious questions for some participants: Are "good" traditional lectures learner-centered? Why is the lecture out of vogue? The term "technology-enhanced learning" similarly inspired different interpretations: If the technology is in place, is our job complete? If not, what are the specific measures for technology-enhanced learning? Determining the campus level of commitment to "open process" was especially harrowing: How "open" should a process be? Should all administrative meetings be open to faculty? Should all agendas be published and disseminated? Assessment is difficult when core definitions contain a diversity of meaning, intent, and interpretation. Alignment of action appears impossible when meaning is elusive.
7. Disagreement about basic outcomes. Like the American melting pot, a university is many diverse cultures stirred together in one glorious, colorful stew. The challenge is to create a campus vision that allows each culture to do what it does best, while encouraging all to compromise for the common good. Despite goodwill and best intentions, the diversity of our committee often left us at odds. It is difficult for faculty members to trust the administration, for administrators to take risks associated with change, for IT to believe that the solution is not in better technology, and for planning and budget officials to understand why positive numbers alone do not make a convincing case for changing time-honored traditions.
After collecting and analyzing data, our interdisciplinary committee often could not reach full consensus on desirable outcomes. Decision by committee, as Dilbert might explain, is an exercise in compromise that can hasten the death of all good ideas. Only when we worked together on issues strategically aligned with clear, campus-wide goals did we find sufficient support and agreement for a final proposal.
8. Limited resources. The resources needed for assessment include time, energy, money, individual commitment, and deliverables. Too often they are tapped from the same sources over and again. These resources are much harder to muster when an institution has difficulty accepting and embracing the cost of assessment. Assessment advocates know that collecting data with no commitment to make meaningful change is a waste of scarce resources. The time spent negotiating resistance could instead be spent creating improvements in teaching, learning, and support services. Transformative assessment, based on a foundation of strategic alignment in goals, presupposes an institutional commitment of resources and a culture that rewards participation in the process.
9. Inadequate emphasis on learning. Assessment is and will always be about learning. Searching for benchmarks, our team found endless solutions that replace teaching with self-serve computer tutorials and simulations. Flash is not the answer. The guaranteed outcome of slapping untested technology onto instruction is an unhappy crowd of learners and a reluctant crowd of instructors. Online or face-to-face, students value stable interactions with reliable instructor contact and feedback. Brown and Duguid (2000) contend that we have always depended on an ineffable social exchange of meaning that has intrinsic cognitive value, is hard to measure, and must be preserved. How do we assess what is hard to measure?
Diverse research agrees that learning is best served when it is social, active, and engaged. Change driven by technology alone or by strictly economic models of efficiency is never in alignment with the mission of a university or the needs of its learners—pace those who envision the future of higher education in fast, for-profit centers of technology and enterprise. Dedicated faculty will continue to protect the temple of reason and the needs of students who seek knowledge and meaning. Deciding where to draw the line, however, can be a challenge when a campus begins addressing core issues of survival, efficiency, and a changing student population. Only our collective alignment on strategic goals for teaching and learning can prevent the siren call of untested technology as the remedy.
10. Complex questions. Even with terminology, goals, and priorities in agreement, the questions raised in assessment remain fundamentally complex. What should we change? How? Why now? Do the data really suggest that solution or innovation? Were all variables and alternative interpretations considered? Decisions should be made with a focused understanding of the data and the institutional mission. Unfortunately, this means asking objective questions and being prepared to face the implications of the answers.
As our committee learned, there are harder questions to answer as well. Are we truly willing to make unexpected changes determined by the data? Is the planned change more than a band-aid? Is the change "transformative"? Does the result take us where we need to be, or is it influenced by the latest higher education fad, fix, or trend? There are no easy answers to complex questions. Transformative assessment practice lies in ruthless inquiry and a determined commitment of resources.
Conclusion and Recommendations
Assessment is hard, and transformative assessment is harder, but it is worth the effort when meaningful change and community result. At ASU West, our commitment to learning outcomes and to the value of assessment is now continuous. We do it because it is the only right thing to do. We do it because it matters. The future of higher education and the quality of learning depends on responsiveness to the changing needs of stakeholders: students, faculty, parents, industry, alumni, and the community.
We highly recommend a few sure steps in the assessment process. As an initial step, form a campus team of risk-takers and consensus-seekers, and solicit broad participation across different sectors of your campus. Be sure to seek administrative support and involvement. A representative from your president or provost's office is crucial to your goals. Your team must have the institutional authority to make a difference, and a culture of assessment begins at the top. In turn, ask department chairs to give your team time to address departmental meetings. Pursuing the topic with faculty development officers on campus will further expand your assessment network. Have faith: Generally it is process, not intention, that is missing.
Meet regularly, and keep the process on the campus radar with regular events and updates. Join the NLII's national community of practice, where you will work with dedicated, generous experts to create shared knowledge and resources. Send someone to the next NLII session on assessment, as well as to the yearly assessment conference sponsored by the American Association of Higher Education (AAHE). Support from your colleagues, nationwide as well as on campus, will strengthen your resolve and keep practice alive.
To establish a common body of knowledge, ensure the accessibility of assessment resources on your campus. For example, ask faculty development officers to make Pellegrino, Chudowski, and Glaser's (2001) Knowing What Students Know widely available; at present, no book offers a better foundation for understanding the nature of good assessment practice in the classroom and in large-scale contexts. Introduce new faculty members to Angelo's (1999) work on improving student learning, to the efforts of the TLT Group, and to the peer-reviewed learning objects available in MERLOT. Produce and promote a Web site of campus resources for assessment. A culture of assessment depends on a shared faculty understanding of the language and the goals, and the dissemination of scholarship will help provide the foundation for consensus.
No matter how it sometimes appears, remember that you are not alone. Seated at the many tables of higher education are colleagues working together to share assessment of who we are, how we are doing, and how we will get where we intend to go.
References

Angelo, T. A. (1999, May). Doing assessment as if learning matters most. AAHE Bulletin. Retrieved September 25, 2003, from http://www.aahebulletin.com/public/archive/angelomay99.asp
Brown, J. S., & Duguid, P. (2000). The social life of information. Cambridge: Harvard Business School Press.
Ehrmann, S. (2001). Technology and educational revolution: Ending the cycle of failure. Retrieved September 25, 2003, from http://www.tltgroup.org/resources/V_Cycle_of_Failure.html
Pellegrino, J. W., Chudowski, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academies Press. Retrieved September 25, 2003, from http://www.nap.edu/books/0309072727/html/