The “outcomes assessment” movement has achieved overwhelming acceptance in American higher education. According to findings released this week by the Association of American Colleges and Universities (AAC&U), a survey of 433 colleges and universities showed that 78% of them claim that “they have a common set of intended learning outcomes for all their undergraduate students.”
The AAC&U press release on April 28 announcing the findings, however, is rather short on detail about where this sweeping academic reform came from. The concept most recently made news when former Secretary of Education Margaret Spellings attempted to impose an “outcomes assessment” regime on colleges and universities through the accreditation system. Her proposal reflected the recommendations of The Commission on the Future of Higher Education, which she convened in 2005 and which issued its report in fall 2006.
Spellings’ proposal met a chilly reception from the accreditors and many others. I criticized it in “Prove You’re Not Stupid” and “Spellings Bee.” Along with others I criticized the attempt to apply “outcomes assessment” to the liberal arts and the implication that the federal government ought to be in a position of specifying the details of college curricula. Much of the education establishment, however, seemed more worried by the source of the idea than its content.
In fact, “outcomes assessment” had been gaining ground with accreditors since the mid-1990s. “Outcomes assessment” is a stepchild of the Total Quality Management (TQM) movement that was popular in American business in the 1980s. Applied to higher education, “outcomes assessment” initially meant asking colleges to develop tools to show that students were reliably learning what the college claimed to teach. But like TQM, “outcomes assessment” was tied to a concept of “continuous improvement.” The goal was to encourage colleges to collect data on what worked well and what could be enhanced.
The major challenge was that, in the eyes of “outcomes assessment” advocates, the grades that students received were of little use as an assessment tool. That’s because grades might measure individual performance in the class, but they showed nothing about whether the class itself advanced larger curricular aims.
As the regional accreditors became enthusiastic about “outcomes assessment,” colleges and universities had to scramble to find ways to measure things they had previously taken for granted, and some things that are probably intrinsically unmeasurable. Within a few years, the accreditors upped the ante yet again by adding to “outcomes assessment” another component called “institutional effectiveness.” The latter applied the same principle of creating quantitative metrics to all the nonacademic functions of colleges and universities. Today virtually all of the accrediting bodies apply these twin principles.
AAC&U president Carol Geary Schneider is quoted in the press release extolling what she sees as “an important shift in focus for American higher education away from measuring progress by students’ seat time and accumulation of credits toward clarifying more transparently what students are expected to learn.” She sees “outcomes assessment” as something that promotes “practices that help students both achieve learning outcomes and also demonstrate their achievement across multiple levels of learning.” I don’t pretend to know what she means, but obviously it is an occasion for joy. Having spent over twenty years as a college and university administrator, I have to say I was never very diligent in measuring students’ progress by their “seat time.” In the places where I have taught and provosted, we tended to look for evidence that students were growing in their command of important ideas and significant books and developing in their intellectual maturity as thinkers and writers. I just don’t know how we got along without either “seat time” or “outcomes assessment.”
Has American higher education witnessed a renaissance in pedagogy and college management as a result of these innovations? If so, no one seems to have noticed. In fact, widespread apprehension over the quality of American college education seems only to have grown. The National Association of Scholars shares this apprehension. We have seen, and frequently reported on, declining academic standards and the ease with which colleges and universities trade substantive intellectual requirements for mere entertainment or ideological indoctrination. “Outcomes assessment” appears to have proven no bulwark at all against this decline in quality. It is a watchdog fast asleep with his jowls resting on his statistical bone.
Thus we take the AAC&U report with a grain of salt. Neither the AAC&U press release nor the report itself mentions that the widespread embrace of “outcomes assessment” by colleges and universities was driven by accreditors. It may well be that by this point many colleges and universities have internalized the concept and that there are now full-time administrators devoted to the task of translating course syllabi and academic programs into the rubrics of “outcomes assessment.” This creates a presumably permanent constituency for the idea.
But it’s not cheap. We have spoken to several college and university administrators who complain of having to divert countless hours and significant resources to retrofitting courses into the “outcomes assessment” rubric. It may well be that the biggest outcome of “outcomes assessment” has been its contribution to college costs. We’re reminded again of the report last year that showed the enormous growth in college administration at the expense of faculty.
We eagerly await AAC&U’s assessment of that outcome.