How do we know whether American universities are really educating students?
In 1996, the National Association of Scholars published a report, “The Dissolution of General Education: 1914-1993.” To test the hypothesis that general education requirements had dissolved since the 1960s, NAS studied the fifty top-ranked schools listed in U.S. News & World Report.
In 2002 our Arizona affiliate examined how much graduates of Arizona’s three public universities knew about history, science, math, literature, the arts, civics, and other subjects, to see what taxpayers were getting in return for their support of the state’s universities. The affiliate issued a 40-question survey to 167 college seniors. The question answered correctly most often was the pop-culture item, “Identify Snoop Doggy Dogg”; the question answered correctly least often was, “Who was the father of the U.S. Constitution?”
This year, our sister organization, the American Council of Trustees and Alumni, launched a new website, WhatWillTheyLearn.org, which grades colleges based on the courses they require. It looks for requirements in the “core subjects” of composition, literature, foreign language, U.S. government or history, economics, mathematics, and natural science.
The assessment movement to quantify “student learning outcomes” is another approach to measuring higher education’s success. But one problem with tallying up outcomes in this way is that some educational outcomes cannot be easily quantified—nor should they be. Another problem is that having professors submit “student learning outcomes” gives them an incentive to aim low. A professor who achieves easy goals may look good on paper but will probably fail to teach students to grapple with complex concepts. NAS has examined the outcomes movement in “Seat Time at the AAC&U” and “LEAPs and Bounds.”
Kevin Carey, policy director of the Washington-based think tank Education Sector, also has ideas for measuring success in higher education. In an interview with Time magazine this week, he argues that colleges must be held accountable. For one thing, he says, colleges should publish the results of standardized tests such as the Collegiate Learning Assessment (CLA). For another, schools should graduate “a reasonable percentage of [their] students compared with other universities that have similar students.” Perhaps not as high a percentage as Harvard’s 98%, he adds: “That’s probably too high. I’m pretty sure you’d have to shoot somebody not to graduate from Harvard.”
Carey recommends that state governments be the ones holding schools, public and private alike, accountable for these things. He concludes that, unlike K-12 education, higher education is generally assumed to be doing a good job and to need no change. “Colleges do more than anyone to perpetuate that myth,” Carey says.
NAS has long called for greater transparency in American higher education, and on that point we join Carey’s accountability prescription. As for graduation rates, he is right that 98% is too high; graduation inflation results from grade inflation. Students need to be challenged intellectually, to stretch the limits of their minds, and even to have a legitimate fear of failing if they don’t put in good work. Aiming to graduate a set percentage of students, even a lower one than Harvard’s, is tricky because, like outcomes assessment, it incentivizes a decrease in rigor for the sake of the numbers. Carey raises some good points, but we need to be careful about how we quantify success. It’s all too easy for attempts to ensure quality education to backfire and make things worse.
So how do we know whether students are really learning what they should? Should we:
A. Examine required courses,
B. Publish test results,
C. Assess “outcomes,”
D. Measure graduation rates, or
E. Other?
A and B seem to be the best methods. What do you think?