Tuesday, November 18, 2003

What Makes A College Good?


Tom Terrific wrote from Madison, WI, touting an article in the December issue of Atlantic Monthly. Drat! The (Fair & Balanced) Blogger went to the AM Web site, and the online content doesn't include articles from the December 2003 issue, just this blurb:

The Bubble of American Supremacy
by George Soros

A prominent financier argues that the heedless assertion of American power in the world resembles a financial bubble—and the moment of truth may be here. "The dominant position the United States occupies in the world," he writes, "is the element of reality that is being distorted. The proposition that the United States will be better off if it uses its position to impose its values and interests everywhere is the misconception. It is exactly by not abusing its power that America attained its current position."


The Atlantic Monthly Web site did contain something else, though. If this be (fair & balanced) envy, so be it.





[Atlantic Monthly]
What Makes A College Good?
A new survey seeks to get behind the well-publicized—and much criticized—college rankings and measure schools by how good a job they do of actually educating their students
by Nicholas Confessore

This fall some two million high school seniors will apply to one of the thousands of accredited colleges and universities in the United States. It will prove to be the most daunting, anxiety-inducing experience many of them have yet had. The fact that most colleges today are more selective than they were a decade ago has filled the college-admissions process with a sense of risk and scarcity—and that in turn has driven a steady increase in the number of schools to which seniors typically apply. Ever-rising prices mean that a college education can be as expensive as a starter home. But instead of bricks and mortar one is buying something that is intangible and yet, seemingly at least, life-determining: among some parents there's a strong belief that failure to attend a name-brand school will cut their children off from a bright future. So students are under great pressure to find—and get into—the "right" schools. With the average public school college counselor laboring under a workload of about 500 students, consumers of higher education must rely on word of mouth, on their impressions from college visits, on name recognition—and on some of the hundred or more college guides and rankings that are published each year, from the encyclopedic Fiske Guide to Colleges to the Princeton Review's The Best 351 Colleges.

The most widely read of these is U.S. News & World Report's "America's Best Colleges," a regular issue of the magazine that was first published in 1983 and today reaches an audience of nearly 11 million people. U.S. News pioneered and largely legitimated the idea of "objective" comparative measures of a school's quality—an idea that has come to permeate the higher-education culture. Colleges pay attention to rankings because a higher ranking one year can bring a flood of new applicants the next, whereas a lower ranking can cause a falloff. Prospective students and their parents pay attention because U.S. News-style rank seems a fair way to gauge whether a school would give them their money's worth. As Steve Goodman, a private education consultant based in Washington, D.C., told me recently, "They say, 'I'm willing to take a second mortgage out for a school I've heard about that I presume is of good quality, but not for one that I've never heard of that may or may not offer a quality education.'"

For education analysts, teachers, and a handful of outspoken university presidents, however, the growing influence of college rankings has for years been a source of deep concern. They believe that rankings not only have distorted the admissions process but are symptomatic of a broader corruption of American universities: administrators, they say, have reshaped their institutions to pursue goals that may not aid—in fact, may actively subvert—the purpose of higher education. Not until the 1990s, when college guides became a growth industry, did it really dawn on critics that college rankings were also providing kids and their parents with something desirable: reliable hard data that could be used to compare a wide array of schools and pick one out of the clutter. To reduce the relevance of one sort of ranking the critics would have to provide another: an alternative measure of educational quality based on a new standard to which institutions could aspire. They would, in other words, have to find a different way to answer the basic question faced by so many high school seniors and their families each year: What makes a college good?

Changes in the U.S. News ranking system since its origin twenty years ago suggest how powerful the demand for hard data about colleges has become. The first U.S. News ranking divided schools into broad categories and asked university presidents to rate the best schools within their peer group. That is, it was basically a popularity contest—or, as the magazine called it, a "reputational survey." Most colleges ignored the results, but the issue was snatched up from newsstands, and U.S. News published the rankings again in 1985 and 1987. The following year the magazine decided to launch them as an annual feature, bundled with stories about higher education and titled "America's Best Colleges." To make the rankings less subjective, U.S. News also began to gather a wide variety of figures. For the past fifteen years the areas in which colleges are measured have remained roughly constant: along with peer assessment they include retention, which counts both a college's graduation rate and the percentage of freshmen who return the following year; faculty resources, including such statistics as student-faculty ratio and average faculty salaries; student selectivity, which factors in a school's acceptance rate along with the SAT or ACT scores and high school class ranks of the students who enroll; an institution's per-student spending; and the alumni-giving rate. Another measure, called graduation-rate performance (which compares a school's expected and actual graduation rates), was added in 1996.

All told, "America's Best Colleges" offers a wealth of factual, specific, and objective information about more than 1,400 colleges and universities—a nice departure from the bland, cheery, and vaguely propagandistic brochures and videos that most of those institutions send out through the mail. Until U.S. News began publishing the rankings, in fact, many schools didn't collect this kind of information systematically; and even if they did, few were willing to make it public. By creating an incentive for institutions to provide reliable data about graduation rates, selectivity, and the like, U.S. News has helped to demystify the admissions process and to create a common vocabulary for parents, applicants, college counselors, and universities themselves. (Indeed, it's a testament to U.S. News's influence that the Department of Education now actually mandates that schools report to the federal government much of the data that the magazine requires for its rankings.) Rankings can help many high school seniors gauge their chances with certain schools, and may even lead them to discover institutions that they or their overworked guidance counselors had never heard of. Brian Kelly, the magazine's executive editor, says that U.S. News "levels the playing field for a kid in El Paso, who's heard of Rice but never knew that his SAT scores were good enough to get into Amherst." Of all the college-guide publishers, U.S. News is arguably the one that collects data the most rigorously and is most open about how it arrives at its final numbers.

But for most critics of rankings, the integrity of the data from U.S. News and its imitators is not the issue. What they object to is how colleges, along with students and parents, put the data to use. Kelly explains that the rankings are merely a "way for people to make a preliminary selection of schools they think might be appropriate for them." However, although the magazine does take care to remind readers that the college experience "cannot be reduced to mere numbers," it also insists that the statistics it provides are "widely accepted indicators of excellence." According to critics, what U.S. News-style data actually measure is something different: an institution's wealth in resources—from smart students to accomplished faculty members to large endowments. The logic is that "lots of resources, plus selective admissions, equals 'excellence' in undergraduate education," as Ernest T. Pascarella, a professor and education researcher at the University of Iowa, wrote in one widely read 2001 critique, published in Change magazine. Rather than talking of America's "Best Colleges," he suggested, college rankings should be called "America's Most Advantaged Colleges."

The distinction may not seem immediately meaningful (if a school is more advantaged, isn't my kid?), so it's worth looking more closely at some of the specific measures. Take faculty resources, which for national universities and liberal-arts colleges account for 20 percent of an institution's total score. (I've referred throughout to the 2003 edition.) Under the U.S. News formula a school that primarily hires full-time professors with the highest degrees in their fields and pays them handsomely scores above a school that relies more on lower-paid, part-time professors without the highest degrees. The thinking here seems plausible enough: the higher-paid professor is more likely than the lower-paid one to have an impressive curriculum vitae and be a good teacher, and a full-time professor has more time to teach and prepare for classes than a harried adjunct.

But in practice the things that make a professor well known in his field—published articles, groundbreaking research—must compete for his time and attention with teaching obligations. Few schools reward their faculty members for being good classroom teachers; it is universal, however, that a scholar's prospects for tenure and other advancement suffer if he or she doesn't publish frequently enough. Consequently, notes Alexander Astin, a widely respected scholar of learning and the longtime head of UCLA's Higher Education Research Institute, there's actually "an inverse relationship between how much contact students have with faculty and how much professors publish." In fact, famous professors may not teach much at all, leaving the work to graduate students. Not surprisingly, Astin's research shows that students at the larger and more elite institutions—that is, the institutions better able to lure high-priced academic talent—tend to have "less satisfaction with relationships with faculty and less satisfaction with teaching." In other words, a university may well be rich in faculty resources and poor in actual teaching.

How about schools that are rich in talented students? Student selectivity counts for 15 percent of a school's U.S. News rank, and to some extent this is justified. Broadly speaking, SAT or ACT scores and high school class rank—which account for most of this category—are together a good indicator of the academic ability of an incoming student. Other things being equal, educators say, it's better to go to a school with lots of smart people—and a more selective school will generally provide a more enriching peer group than a less selective one. But a school's selectivity does not necessarily reflect the quality of the education it offers. Research on learning shows that the highest-achieving students—the ones who have proved over a lifetime that they are "good at school"—will probably thrive intellectually regardless of how effective their school's teachers are or how much money the school spends, so it's hard to tease out how much credit an institution deserves for its students' college achievement. To use a medical analogy, judging the educational excellence of an institution that admits only high-achieving students is rather like judging the competence of a doctor who sees only robustly healthy patients.

How about schools that are rich, period? On the whole, such schools can spend more money on their students and score better in the "financial resources" category—which measures spending on things such as faculty salaries, libraries and other forms of academic support, and student counseling—than schools with tiny endowments. The catch is that a high level of per-student spending does not necessarily translate into, say, a high level of per-student learning. Just as a more highly paid faculty member may not be a better teacher, a more highly paid student counselor may not be a better adviser. It's the difference between having a well-stocked library and knowing whether your students actually read a lot of books; that is, how often and how effectively do students use the resources provided? Researchers say that the existence of resources—library resources, lab resources, human resources—is a necessary but not, in and of itself, a sufficient condition for learning. "Donations and endowments and the like are a tremendous asset for institutions," says Gary Pike, who runs the office of institutional research at Mississippi State University. "But the link between that and the quality of a student's education is pretty tenuous."

Then, too, the schools that can spend the most per student are generally the ones that raise the most money. But a school's proficiency at "growing" its endowment or attracting grants does not automatically lead to better teaching and more learning. The alumni-giving rate, for example, is often used in rankings as a rough measure of how well a school meets the needs of its students: if the alumni donate to their alma mater at a high rate, the thinking goes, they must be pretty well satisfied with the education and services the school provided. But the opinion of people years or even decades past graduation day doesn't tell us much about how today's students are doing, and may well be more closely related to the football team's current win-loss record than to satisfaction with the educational experience. In practice the schools with the best alumni-giving rates are usually those with the most aggressive development offices. Princeton, which has the highest alumni-giving rate of any college in the country (64 percent, according to the 2003 rankings), is probably the only one that hosts annual reunions for every class and publishes an alumni magazine every other week during the school year. Although organized annual giving campaigns are common at elite private schools, they are less so at public universities or local colleges—and this may or may not reflect on the quality of the education offered by the latter institutions.

The fact that faculty, student, and financial resources don't necessarily correlate with high levels of learning also undercuts the most important of the U.S. News measures: peer assessment, which, at 25 percent, is the single largest component of a school's overall score. In contrast with the other elements of the ranking formula, peer assessment is purely subjective: university presidents, provosts, and admissions officers are simply asked to rate each school on a scale of 1 to 5. In one sense, then, rankings have merely made explicit the perceptions of prestige and quality that existed among educators long before anyone tried to record them. But it turns out that university officials tend to base their assessment of "reputation" on an institution's wealth in resources. Writing in The Washington Monthly two years ago, Nicholas Thompson, a journalist, and Amy Graham, U.S. News's former director of data research, found that a high reputation score in U.S. News correlated "much more closely with high per-faculty federal research and development expenditures than with high faculty-student ratios or good graduation-rate performance, the magazine's best measures of undergraduate learning."

On the whole, rich, prestigious, research-oriented universities are assumed, rightly or wrongly, to provide a better education than other schools. Therefore, university administrators, especially those at middle-tier schools looking to build a better reputation, are devoting increasing amounts of time and money to improving the things that build prestige, whether or not those things improve the educational experience of the undergraduates the institution is meant to serve. Department chairs work to lure well-known research scientists, because a "hot" department attracts other stars, along with a bigger cut of research funding and talented graduate students. Financial-aid offices shift money from need-based to merit scholarships, hoping to tempt a few elite students to attend their institutions—thereby improving selectivity. (And a higher selectivity ranking tends to boost applications, raising next year's score even higher.) Some schools may even spend millions of dollars to build a more competitive athletics program. If the gamble pays off, they'll end up with a winning football team, which in turn can help build a school's buzz, attract more applications, and inspire alumni to open their wallets. But a high-priced coach, a brand-new stadium, and a state-of-the-art weight room won't help students learn. "The pursuit of prestige is expensive and risky," the economists Charles A. Goldman, Susan M. Gates, and Dominic J. Brewer concluded in a landmark RAND Corporation study published in 2001. "A college may make large investments, often placing tremendous strain on its financial health, yet neglect the needs of undergraduate students and other 'customers' ... who don't contribute to its prestige."

For several years now there has been a strong consensus among top education researchers that many colleges, partly because of rankings, are focusing too little on what really matters—what goes on in the classroom. Russell Edgerton, who served as the president of the American Association of Higher Education for nineteen years and is now the director of the Pew Forum on Undergraduate Learning, recently summarized it for me this way: "We knew that who students were and where they went to college mattered less than what they did when they got there." To put it in academic terms, much of what the rankings measure is "inputs," or the resources that lay a foundation for a student's education—things such as class size, per-student spending, and a student's own achievement level. The closest the rankings come to gauging what happens during college is to measure a handful of "outputs"—that is, what comes out at the other end of the college experience. One example of this is graduation-rate performance. But that category, which measures the effects of a college's programs and policies on its students' graduation rate, counts for only five percent of a school's score, whereas input-related measures are worth almost half. In fairness, part of the reason that college guides focus so much on inputs is that measuring students' experience once they get to college—especially inside the classroom—is a much subtler and more invasive process; and for many years the infrastructure and resources needed to do it simply didn't exist. An internal critique, commissioned by U.S. News in 1997 and performed by the National Opinion Research Center, concluded as much: although the magazine should find some way to measure "student experiences and curriculum," the center reported, there was "no way in which such data could be collected at a scale that would be statistically defensible but not at the same time bankrupt U.S. News."
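
To see concretely why that weighting matters, here is a minimal sketch of a U.S. News-style composite score. Only the four weights named in this article (25 percent for peer assessment, 20 for faculty resources, 15 for selectivity, and 5 for graduation-rate performance) come from the piece; the catch-all fifth category, its weight, and the two hypothetical schools are illustrative assumptions, not the magazine's actual formula.

```python
# Hypothetical sketch of a U.S. News-style weighted composite score.
# The first four weights are the ones named in the article; "other_inputs"
# is a placeholder for the remaining categories (retention, spending,
# alumni giving), whose real weights are not given here.

WEIGHTS = {
    "peer_assessment": 0.25,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "graduation_rate_performance": 0.05,   # the lone "output" measure
    "other_inputs": 0.35,                  # placeholder, not the actual breakdown
}

def composite_score(indicators):
    """Weighted sum of 0-100 category scores."""
    return sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS)

# Two made-up schools: one rich in input measures, one strong on the
# output measure. The input-heavy school wins easily, which is the
# critics' point about what this kind of formula rewards.
advantaged = {"peer_assessment": 90, "faculty_resources": 85,
              "student_selectivity": 88, "graduation_rate_performance": 60,
              "other_inputs": 80}
effective = {"peer_assessment": 60, "faculty_resources": 55,
             "student_selectivity": 50, "graduation_rate_performance": 95,
             "other_inputs": 55}

print(composite_score(advantaged))   # 83.7
print(composite_score(effective))    # 57.5
```

In a formula built this way, a school could excel on the only measure tied to outcomes and barely move its overall score.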

It was in an effort to solve the difficulties of measuring students' college experience that in February of 1998 Edgerton invited a dozen or so college presidents and leading education analysts to meet in Philadelphia. Soft-spoken, articulate, and widely respected within the higher-education community, Edgerton had long been concerned about what he calls "the ground rules of higher education"—the set of incentives, including but not limited to college rankings, that were driving colleges and universities to compete for prestige. During the following months his working group discussed how it could improve the existing rankings or, if that failed, create a new, more meaningful standard of quality. "We felt that if we could open up a new source of evidence," he told me recently, "not about resources and reputation but about whether or not universities were using those resources to improve student learning, we could make something happen."

There was already a vast empirical literature on "student engagement," which, broadly speaking, means how effectively students use the resources at their disposal. Decades' worth of research had suggested that certain behaviors promoted higher levels of engagement, and several studies published in the late 1980s and early 1990s had synthesized and distilled that research into a set of recommendations for specific institutional practices that would help encourage those behaviors. (Ernest T. Pascarella and Patrick T. Terenzini's 1991 report How College Affects Students, for instance, reviewed more than 2,600 studies spanning twenty years.) But Edgerton and his group realized that to reform or replace existing rankings—notably U.S. News's—they would have to surmount two practical challenges. One was to design a methodologically sound survey that would elicit reliable data about whether schools were engaged in the right practices, without costing much for those schools to take part in. The other challenge was to find enough money to launch the survey nationally—that is, to be as comprehensive about measuring learning as U.S. News already was about measuring resources. The first problem was solved through a pair of pilot projects led by Peter Ewell, an expert on institutional assessment, and conducted during the spring and fall of 1999, with help from experts in survey research. The second problem was solved by Pew, which awarded Indiana University's Center for Postsecondary Planning and Research, headed by George Kuh, a $3.3 million grant to get the project off the ground.

In 2000 Kuh and his staff launched what was dubbed the National Survey of Student Engagement. Research indicated that "engagement" could be reliably measured by surveying students themselves. So instead of being sent to college presidents, provosts, and admissions offices, NSSE (pronounced Nessy) is distributed to students. Part of the survey is devoted simply to finding out how often students perform certain tasks. NSSE asks how often students write papers and how long those papers are, so the answers reflect not only the overall amount that students are writing but also whether teachers at a school tend to assign one big paper at the end of a course or a series of shorter papers spread throughout the semester. It asks students how often they talk with faculty members inside and outside of class, and about what (grades? assignments? career plans?). And, yes, it asks how often students read books—not only for class but also "for personal enjoyment or academic enrichment."

NSSE also questions students directly about particular services their institutions offer. Instead of simply measuring how much a school spends on advising, for instance, it asks students how much they benefit from it. In a similar vein, one set of questions asks whether students have done or plan to do certain things before graduation, such as take an internship, study abroad, or work on a research project with a professor; studies show that participation in one or more of them has a variety of desirable effects on engagement. Other questions relate to how students divide their week among different tasks. Not surprisingly, preparing for class, working on campus, and taking part in extracurricular activities all tend to improve engagement, whereas time spent working off campus, commuting, or caring for dependents does not. Certain parts of NSSE seek to delve into deeper kinds of engagement. One series of questions, for instance, measures what educators call "integration"—whether students are applying the knowledge they've learned. Students are asked how often they have worked on a paper or a project that required incorporating ideas or information from different sources; how often they've acknowledged diverse perspectives—different racial viewpoints, political beliefs, and so forth—in class discussions or writing assignments; and how often they've discussed ideas from readings or classes with people outside of class. The more students do these things, the better they learn and the greater the likelihood that what they're learning is transferable to other settings.

NSSE has caught on quickly. In the first year 276 institutions took part in the survey; this year the number was up to 431. NSSE encourages institutions to participate in the survey every few years. Today more than 730 schools, representing 58 percent of undergraduate enrollment at four-year institutions in the United States, are in the NSSE database. This year NSSE has surveyed about 150,000 students—by most standards an extraordinarily large sample size. (A major Gallup poll, by comparison, surveys about 5,000 people.) Schools taking part in the survey run the gamut from big to small, public to private, urban to suburban, selective to less selective. Even U.S. News has shown interest in the work of Kuh's center. For last year's edition of "America's Best Colleges" the magazine published some NSSE data (provided by a small selection of schools) on its Web site and in a section of the spinoff college guide, under the title "Seniors Have Their Say." The main holdouts, unsurprisingly, are the few dozen most selective and prestigious schools in the United States—that is, the ones that already do well on measures of reputation and resources. (Some of them no doubt worry about whether they would also do well on measures of engagement. A recent study by Gary Pike of fourteen research universities that are in the NSSE database found almost no correlation between the institutions' scores on NSSE measures and their scores on U.S. News's "indicators of excellence.")

Part of what accounts for NSSE's growth is that the survey is actually useful to schools. Taken together, the survey questions elicit information about five proven "benchmarks of effective educational practices": how academically challenging a school is, how much students interact with faculty members, how supportive of learning the campus environment is, how well the institution promotes "active and collaborative learning," and whether the school offers "enriching educational experiences." Kuh and his team plug in a particular institution's characteristics (its size, the academic ability of its incoming students, and the proportion who attend full time), to calculate how well it should do on those benchmarks, and provide schools with a report that grades them on how well they actually do. Schools that underperform in one area or another then have some sense of where they need to improve. Kuh also lets each institution pick a small group of other schools it considers peers; NSSE then shoots back the aggregate scores of that peer group. (Some schools also swap their survey results directly.) The idea is to get schools to reorient themselves toward improvements that can make a measurable difference in the education of their undergraduates. "Comparing almost any institution against a national number isn't very helpful," Kuh says, "especially if you're trying to mobilize a set of faculty members around thinking about how they might improve the undergraduate program."
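
The predicted-versus-actual comparison in that report can be pictured with a short sketch. The code below is only an illustration of the idea: the toy numbers, the plain least-squares model, and the grading thresholds are assumptions made for the example, not NSSE's actual statistical method.

```python
# Illustrative sketch of grading schools on actual vs. expected benchmark
# scores, given institutional characteristics (the idea described above).
# All data and modeling choices here are invented for the example.

import numpy as np

# Columns: enrollment (thousands), mean incoming SAT (hundreds),
# proportion of students attending full time.
characteristics = np.array([
    [25.0, 11.5, 0.85],
    [ 3.5, 13.0, 0.95],
    [12.0, 10.0, 0.60],
    [ 8.0, 12.0, 0.90],
    [30.0, 10.5, 0.70],
    [ 5.0, 11.0, 0.80],
])
# Actual scores on one hypothetical benchmark (0-100 scale).
actual = np.array([48.0, 70.0, 41.0, 62.0, 39.0, 58.0])

# Expected scores from a simple least-squares fit on the characteristics.
X = np.column_stack([np.ones(len(actual)), characteristics])
coefs, *_ = np.linalg.lstsq(X, actual, rcond=None)
expected = X @ coefs

# Report each school's gap between actual and expected performance.
for i, (act, exp) in enumerate(zip(actual, expected)):
    gap = act - exp
    if gap > 2:
        verdict = "above expected"
    elif gap < -2:
        verdict = "below expected"
    else:
        verdict = "about as expected"
    print(f"School {i}: actual {act:.0f}, expected {exp:.1f} ({verdict})")
```

The point of reporting the gap rather than the raw score is the same one Kuh makes: a school is measured against what institutions like it should achieve, not against a single national number.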

Although the vast majority of schools cannot easily improve their performance in terms of input measures (they can't build endowments from scratch, or simply attract twice as many National Merit Scholars for next year's freshman class), they can influence the experiences of their students. Among the several college presidents I spoke with about NSSE was Peter Smith, a higher-education veteran (and onetime member of Congress) who in 1994 helped to launch a branch of the California State University system in Monterey Bay. As schools go, CSUMB is relatively small, unselective, and unknown. "So when we started," Smith told me recently, "one of the first questions we were faced with was 'How is anyone going to know if you're good?'" He saw NSSE as a way to answer that question, and when the survey was launched three years ago, he eagerly signed up.

Today, NSSE measures the school both against its own past performance and against other state universities in California, and the results have already helped Smith and his colleagues to identify areas needing improvement. One problem, they discovered, was the campus environment. Situated on a former military base, the college "looks like the set of On the Beach after the bomb dropped," as Smith put it. "It's a pretty lonely place." So the school set out to improve its physical amenities; it has built an aquatic center and plans to knock down some of the abandoned and unused buildings that give the campus its ghost-town feel. The survey also confirmed something that administrators had suspected but weren't previously able to measure: despite a generally challenging and rewarding academic environment, freshmen and sophomores weren't connecting with those faculty members who would serve as advisers after the students declared a major. As a result students were having trouble making the transition from sophomore to junior year, and some were leaving school. These days, students must investigate various areas of study during their freshman year and plan a course of study that will qualify them for a major; during sophomore year everyone is assigned a major adviser to track his or her progress. "The single most important thing about the NSSE data," Smith said, "is that it takes important elements of your operations and it gives you hard data, very clear information, about how you're doing—removing it from hunch, opinion, and philosophy, all the lenses through which most conversations about higher education are conducted. It changes everything." Today CSUMB is the rare college that actually posts its NSSE results on the Web. "Good news or bad, we want people to know it," Smith said. "Because that's going to spur us to do better."

That said, NSSE isn't perfect. The chief complaint—made by, among others, UCLA's Alexander Astin, who sits on NSSE's board as what he calls a "friendly critic"—is that the survey doesn't really measure "added value." NSSE is administered to the freshmen and seniors at a given institution in a given year, rather than longitudinally, to the same set of students over time. So although it does measure whether a school is generally doing the right things, it's hard to tell from the results how much a school is helping students to learn. "NSSE is certainly a step above what U.S. News provides as far as what matters," says Linda Sax, a UCLA professor who runs the nationwide College Freshman Survey. "It descriptively tells us how engaged students are at an institution. But it doesn't allow us to assess institutional effectiveness. It doesn't tell the whole story." George Kuh and Peter Ewell both acknowledge that this is a drawback in NSSE's approach, one that they hope to address over time. A NSSE-size survey done longitudinally, they point out, would have been even harder to get off the ground, so they opted for something simpler, if less comprehensive.

A more practical drawback is that NSSE doesn't make its results available to the public. Although the survey was originally conceived as the basis for an alternative to the rankings, schools proved unwilling to take part in it unless they alone could decide whether to reveal their results. This means that although it can be a useful self-improvement tool for institutions, applicants and parents don't have access to NSSE data from most schools. Neither does U.S. News. For this fall's edition only about seven percent of the schools rated by the magazine allowed it to use their NSSE data—which is one reason why "America's Best Colleges" doesn't integrate the survey data into the rankings proper. And U.S. News uses what NSSE data it does get from colleges only sparingly: of the ninety-plus questions on the survey, the 2003 college guide included responses to fifteen.

Still, there is a way for college applicants and their parents to become familiar with the kinds of information NSSE is designed to measure. They can, for instance, ask admissions offices and campus tour guides the same kinds of questions that Kuh and his team put to institutions as a whole: How much contact do students have with professors? How often do they write papers? How do students receive help selecting classes? What do students like about the campus? How many undergraduates study abroad? (The survey helpfully publishes a pocket-guide checklist of questions to ask while visiting campuses.) As schools begin to sense a demand from their customers for NSSE-type information, they'll feel pressure to start providing it. Indeed, a similar sort of process is already under way within the higher-education community. A growing number of accreditation organizations now encourage schools to take part in surveys like NSSE and report back the results; likewise, state university systems have begun to use NSSE to assess their constituent schools. Over time, as its influence grows, the survey can become what Edgerton calls a tool of "soft accountability," pushing schools to cultivate practices that are known to improve education.

Kuh and his colleagues may never get elite schools to participate in NSSE. But in a certain sense they don't really need to. Whereas the smartest, highest-ability students are not all that dependent on institutions to get them to learn, colleges can make a great difference for students of average or lower academic ability. Inevitably, the vast majority of such students will attend institutions lacking the prestige and wealth of a Harvard or a Yale. But if NSSE succeeds, those students will not lack for a good education.

Copyright © 2003 The Atlantic Monthly



