Tuesday, November 18, 2003

What Makes A College Good?


Tom Terrific wrote from Madison, WI, touting an article in the December issue of the Atlantic Monthly. Drat! The (Fair & Balanced) Blogger went to the AM Web site, but the online content doesn't include articles from the December 2003 issue, just this blurb:

The Bubble of American Supremacy
by George Soros

A prominent financier argues that the heedless assertion of American power in the world resembles a financial bubble—and the moment of truth may be here. "The dominant position the United States occupies in the world," he writes, "is the element of reality that is being distorted. The proposition that the United States will be better off if it uses its position to impose its values and interests everywhere is the misconception. It is exactly by not abusing its power that America attained its current position."


The Atlantic Monthly Web site did contain something else, though. If this be (fair & balanced) envy, so be it.

[x Atlantic Monthly]
What Makes A College Good?
A new survey seeks to get behind the well-publicized—and much criticized—college rankings and measure schools by how good a job they do of actually educating their students
by Nicholas Confessore

This fall some two million high school seniors will apply to one of the thousands of accredited colleges and universities in the United States. It will prove to be the most daunting, anxiety-inducing experience many of them have yet had. The fact that most colleges today are more selective than they were a decade ago has filled the college-admissions process with a sense of risk and scarcity—and that in turn has driven a steady increase in the number of schools to which seniors typically apply. Ever-rising prices mean that a college education can be as expensive as a starter home. But instead of bricks and mortar one is buying something that is intangible and yet, seemingly at least, life-determining: among some parents there's a strong belief that failure to attend a name-brand school will cut their children off from a bright future. So students are under great pressure to find—and get into—the "right" schools. With the average public-school college counselor laboring under a workload of about 500 students, consumers of higher education must rely on word of mouth, on their impressions from college visits, on name recognition—and on some of the hundred or more college guides and rankings that are published each year, from the encyclopedic Fiske Guide to Colleges to the Princeton Review's The Best 351 Colleges.

The most widely read of these is U.S. News & World Report's "America's Best Colleges," a regular issue of the magazine that was first published in 1983 and today reaches an audience of nearly 11 million people. U.S. News pioneered and largely legitimated the idea of "objective" comparative measures of a school's quality—an idea that has come to permeate the higher-education culture. Colleges pay attention to rankings because a higher ranking one year can bring a flood of new applicants the next, whereas a lower ranking can cause a falloff. Prospective students and their parents pay attention because U.S. News-style rank seems a fair way to gauge whether a school would give them their money's worth. As Steve Goodman, a private education consultant based in Washington, D.C., told me recently, "They say, 'I'm willing to take a second mortgage out for a school I've heard about that I presume is of good quality, but not for one that I've never heard of that may or may not offer a quality education.'"

For education analysts, teachers, and a handful of outspoken university presidents, however, the growing influence of college rankings has for years been a source of deep concern. They believe that rankings not only have distorted the admissions process but are symptomatic of a broader corruption of American universities: administrators, they say, have reshaped their institutions to pursue goals that may not aid—in fact, may actively subvert—the purpose of higher education. Not until the 1990s, when college guides became a growth industry, did it really dawn on critics that college rankings were also providing kids and their parents with something desirable: reliable hard data that could be used to compare a wide array of schools and pick one out of the clutter. To reduce the relevance of one sort of ranking the critics would have to provide another: an alternative measure of educational quality based on a new standard to which institutions could aspire. They would, in other words, have to find a different way to answer the basic question faced by so many high school seniors and their families each year: What makes a college good?

Changes in the U.S. News ranking system since its origin twenty years ago suggest how powerful the demand for hard data about colleges has become. The first U.S. News ranking divided schools into broad categories and asked university presidents to rate the best schools within their peer group. That is, it was basically a popularity contest—or, as the magazine called it, a "reputational survey." Most colleges ignored the results, but the issue was snatched up from newsstands, and U.S. News published the rankings again in 1985 and 1987. The following year the magazine decided to launch them as an annual feature, bundled with stories about higher education and titled "America's Best Colleges." To make the rankings less subjective, U.S. News also began to gather a wide variety of figures. For the past fifteen years the areas in which colleges are measured have remained roughly constant: along with peer assessment they include retention, which counts both a college's graduation rate and the percentage of freshmen who return the following year; faculty resources, including such statistics as student-faculty ratio and average faculty salaries; student selectivity, which factors in a school's acceptance rate along with the SAT or ACT scores and high school class ranks of the students who enroll; an institution's per-student spending; and the alumni-giving rate. Another measure, called graduation-rate performance (which compares a school's expected and actual graduation rates), was added in 1996.

All told, "America's Best Colleges" offers a wealth of factual, specific, and objective information about more than 1,400 colleges and universities—a nice departure from the bland, cheery, and vaguely propagandistic brochures and videos that most of those institutions send out through the mail. Until U.S. News began publishing the rankings, in fact, many schools didn't collect this kind of information systematically; and even if they did, few were willing to make it public. By creating an incentive for institutions to provide reliable data about graduation rates, selectivity, and the like, U.S. News has helped to demystify the admissions process and to create a common vocabulary for parents, applicants, college counselors, and universities themselves. (Indeed, it's a testament to U.S. News's influence that the Department of Education now actually mandates that schools report to the federal government much of the data that the magazine requires for its rankings.) Rankings can help many high school seniors gauge their chances with certain schools, and may even lead them to discover institutions that they or their overworked guidance counselors had never heard of. Brian Kelly, the magazine's executive editor, says that U.S. News "levels the playing field for a kid in El Paso, who's heard of Rice but never knew that his SAT scores were good enough to get into Amherst." Of all the college-guide publishers, U.S. News is arguably the one that collects data the most rigorously and is most open about how it arrives at its final numbers.

But for most critics of rankings, the integrity of the data from U.S. News and its imitators is not the issue. What they object to is how colleges, along with students and parents, put the data to use. Kelly explains that the rankings are merely a "way for people to make a preliminary selection of schools they think might be appropriate for them." However, although the magazine does take care to remind readers that the college experience "cannot be reduced to mere numbers," it also insists that the statistics it provides are "widely accepted indicators of excellence." According to critics, what U.S. News-style data actually measure is something different: an institution's wealth in resources—from smart students to accomplished faculty members to large endowments. The logic is that "lots of resources, plus selective admissions, equals 'excellence' in undergraduate education," as Ernest T. Pascarella, a professor and education researcher at the University of Iowa, wrote in one widely read 2001 critique, published in Change magazine. Rather than talking of America's "Best Colleges," he suggested, college rankings should be called "America's Most Advantaged Colleges."

The distinction may not seem immediately meaningful (if a school is more advantaged, isn't my kid?), so it's worth looking more closely at some of the specific measures. Take faculty resources, which for national universities and liberal-arts colleges account for 20 percent of an institution's total score. (I've referred throughout to the 2003 edition.) Under the U.S. News formula a school that primarily hires full-time professors with the highest degrees in their fields and pays them handsomely scores above a school that relies more on lower-paid, part-time professors without the highest degrees. The thinking here seems plausible enough: the higher-paid professor is more likely than the lower-paid one to have an impressive curriculum vitae and be a good teacher, and a full-time professor has more time to teach and prepare for classes than a harried adjunct.

But in practice the things that make a professor well known in his field—published articles, groundbreaking research—must compete for his time and attention with teaching obligations. Few schools reward their faculty members for being good classroom teachers; it is universal, however, that a scholar's prospects for tenure and other advancement suffer if he or she doesn't publish frequently enough. Consequently, notes Alexander Astin, a widely respected scholar of learning and the longtime head of UCLA's Higher Education Research Institute, there's actually "an inverse relationship between how much contact students have with faculty and how much professors publish." In fact, famous professors may not teach much at all, leaving the work to graduate students. Not surprisingly, Astin's research shows that students at the larger and more elite institutions—that is, the institutions better able to lure high-priced academic talent—tend to have "less satisfaction with relationships with faculty and less satisfaction with teaching." In other words, a university may well be rich in faculty resources and poor in actual teaching.

How about schools that are rich in talented students? Student selectivity counts for 15 percent of a school's U.S. News rank, and to some extent this is justified. Broadly speaking, SAT or ACT scores and high school class rank—which account for most of this category—are together a good indicator of the academic ability of an incoming student. Other things being equal, educators say, it's better to go to a school with lots of smart people—and a more selective school will generally provide a more enriching peer group than a less selective one. But a school's selectivity does not necessarily reflect the quality of the education it offers. Research on learning shows that the highest-achieving students—the ones who have proved over a lifetime that they are "good at school"—will probably thrive intellectually regardless of how effective their school's teachers are or how much money the school spends, so it's hard to tease out how much credit an institution deserves for its students' college achievement. To use a medical analogy, judging the educational excellence of an institution that admits only high-achieving students is rather like judging the competence of a doctor who sees only robustly healthy patients.

How about schools that are rich, period? On the whole, such schools can spend more money on their students and score better in the "financial resources" category—which measures spending on things such as faculty salaries, libraries and other forms of academic support, and student counseling—than schools with tiny endowments. The catch is that a high level of per-student spending does not necessarily translate into, say, a high level of per-student learning. Just as a more highly paid faculty member may not be a better teacher, a more highly paid student counselor may not be a better adviser. It's the difference between having a well-stocked library and knowing whether your students actually read a lot of books; that is, how often and how effectively do students use the resources provided? Researchers say that the existence of resources—library resources, lab resources, human resources—is a necessary but not, in and of itself, a sufficient condition for learning. "Donations and endowments and the like are a tremendous asset for institutions," says Gary Pike, who runs the office of institutional research at Mississippi State University. "But the link between that and the quality of a student's education is pretty tenuous."

Then, too, the schools that can spend the most per student are generally the ones that raise the most money. But a school's proficiency at "growing" its endowment or attracting grants does not automatically lead to better teaching and more learning. The alumni-giving rate, for example, is often used in rankings as a rough measure of how well a school meets the needs of its students: if the alumni donate to their alma mater at a high rate, the thinking goes, they must be pretty well satisfied with the education and services the school provided. But the opinion of people years or even decades past graduation day doesn't tell us much about how today's students are doing, and may well be more closely related to the football team's current win-loss record than to satisfaction with the educational experience. In practice the schools with the best alumni-giving rates are usually those with the most aggressive development offices. Princeton, which has the highest alumni-giving rate of any college in the country (64 percent, according to the 2003 rankings), is probably the only one that hosts annual reunions for every class and publishes an alumni magazine every other week during the school year. Although organized annual giving campaigns are common at elite private schools, they are less so at public universities or local colleges—and this may or may not reflect on the quality of the education offered by the latter institutions.

The fact that faculty, student, and financial resources don't necessarily correlate with high levels of learning also undercuts the most important of the U.S. News measures: peer assessment, which, at 25 percent, is the single largest component of a school's overall score. In contrast with the other elements of the ranking formula, peer assessment is purely subjective: university presidents, provosts, and admissions officers are simply asked to rate each school on a scale of 1 to 5. In one sense, then, rankings have merely made explicit the perceptions of prestige and quality that existed among educators long before anyone tried to record them. But it turns out that university officials tend to base their assessment of "reputation" on an institution's wealth in resources. Writing in The Washington Monthly two years ago, Nicholas Thompson, a journalist, and Amy Graham, U.S. News's former director of data research, found that a high reputation score in U.S. News correlated "much more closely with high per-faculty federal research and development expenditures than with high faculty-student ratios or good graduation-rate performance, the magazine's best measures of undergraduate learning."

On the whole, rich, prestigious, research-oriented universities are assumed, rightly or wrongly, to provide a better education than other schools. Therefore, university administrators, especially those at middle-tier schools looking to build a better reputation, are devoting increasing amounts of time and money to improving the things that build prestige, whether or not those things improve the educational experience of the undergraduates the institution is meant to serve. Department chairs work to lure well-known research scientists, because a "hot" department attracts other stars, along with a bigger cut of research funding and talented graduate students. Financial-aid offices shift money from need-based to merit scholarships, hoping to tempt a few elite students to attend their institutions—thereby improving selectivity. (And a higher selectivity ranking tends to boost applications, raising next year's score even higher.) Some schools may even spend millions of dollars to build a more competitive athletics program. If the gamble pays off, they'll end up with a winning football team, which in turn can help build a school's buzz, attract more applications, and inspire alumni to open their wallets. But a high-priced coach, a brand-new stadium, and a state-of-the-art weight room won't help students learn. "The pursuit of prestige is expensive and risky," the economists Charles A. Goldman, Susan M. Gates, and Dominic J. Brewer concluded in a landmark RAND Corporation study published in 2001. "A college may make large investments, often placing tremendous strain on its financial health, yet neglect the needs of undergraduate students and other 'customers' ... who don't contribute to its prestige."

For several years now there has been a strong consensus among top education researchers that many colleges, partly because of rankings, are focusing too little on what really matters—what goes on in the classroom. Russell Edgerton, who served as the president of the American Association of Higher Education for nineteen years and is now the director of the Pew Forum on Undergraduate Learning, recently summarized it for me this way: "We knew that who students were and where they went to college mattered less than what they did when they got there." To put it in academic terms, much of what the rankings measure is "inputs," or the resources that lay a foundation for a student's education—things such as class size, per-student spending, and a student's own achievement level. The closest the rankings come to gauging what happens during college is to measure a handful of "outputs"—that is, what comes out at the other end of the college experience. One example of this is graduation rate performance. But that category, which measures the effects of a college's programs and policies on its students' graduation rate, counts for only five percent of a school's score, whereas input-related measures are worth almost half. In fairness, part of the reason that college guides focus so much on inputs is that measuring students' experience once they get to college—especially inside the classroom—is a much subtler and more invasive process; and for many years the infrastructure and resources needed to do it simply didn't exist. An internal critique, commissioned by U.S. News in 1997 and performed by the National Opinion Research Center, concluded as much: although the magazine should find some way to measure "student experiences and curriculum," the center reported, there was "no way in which such data could be collected at a scale that would be statistically defensible but not at the same time bankrupt U.S. News."
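
To make that weighting arithmetic concrete, here is a minimal sketch, in Python, of how a composite score like the U.S. News formula is assembled. The 25, 20, 15, and 5 percent weights are the ones cited above; the weights for retention, financial resources, and alumni giving are placeholder assumptions chosen only so that the total reaches 100 percent, not the magazine's actual figures.

    # Illustrative only: combine normalized category scores (0-100)
    # into a weighted composite. Weights marked "cited" appear in the
    # article; the rest are placeholder assumptions.
    WEIGHTS = {
        "peer_assessment": 0.25,              # cited
        "faculty_resources": 0.20,            # cited
        "student_selectivity": 0.15,          # cited
        "graduation_rate_performance": 0.05,  # cited
        "retention": 0.20,                    # assumption
        "financial_resources": 0.10,          # assumption
        "alumni_giving": 0.05,                # assumption
    }

    def composite_score(category_scores):
        """Weighted sum of normalized category scores (0-100 scale)."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
        return sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS)

    # A school rich in inputs but weak on graduation-rate performance
    # (the formula's closest proxy for learning) still scores high.
    school = {
        "peer_assessment": 90, "faculty_resources": 85,
        "student_selectivity": 88, "graduation_rate_performance": 40,
        "retention": 92, "financial_resources": 95, "alumni_giving": 60,
    }
    print(round(composite_score(school), 1))  # -> 85.6

The point of the sketch is the one the critics make: because input-related weights dominate, even a large swing in the formula's lone output measure barely moves the composite.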

It was in an effort to solve the difficulties of measuring students' college experience that in February of 1998 Edgerton invited a dozen or so college presidents and leading education analysts to meet in Philadelphia. Soft-spoken, articulate, and widely respected within the higher-education community, Edgerton had long been concerned about what he calls "the ground rules of higher education"—the set of incentives, including but not limited to college rankings, that were driving colleges and universities to compete for prestige. During the following months his working group discussed how it could improve the existing rankings or, if that failed, create a new, more meaningful standard of quality. "We felt that if we could open up a new source of evidence," he told me recently, "not about resources and reputation but about whether or not universities were using those resources to improve student learning, we could make something happen."

There was already a vast empirical literature on "student engagement," which, broadly speaking, means how effectively students use the resources at their disposal. Decades' worth of research had suggested that certain behaviors promoted higher levels of engagement, and several studies published in the late 1980s and early 1990s had synthesized and distilled that research into a set of recommendations for specific institutional practices that would help encourage those behaviors. (Ernest T. Pascarella and Patrick T. Terenzini's 1991 report How College Affects Students, for instance, reviewed more than 2,600 studies spanning twenty years.) But Edgerton and his group realized that to reform or replace existing rankings—notably U.S. News's—they would have to surmount two practical challenges. One was to design a methodologically sound survey that would elicit reliable data about whether schools were engaged in the right practices without making participation costly for the schools. The other challenge was to find enough money to launch the survey nationally—that is, to be as comprehensive about measuring learning as U.S. News already was about measuring resources. The first problem was solved through a pair of pilot projects led by Peter Ewell, an expert on institutional assessment, and conducted during the spring and fall of 1999, with help from experts in survey research. The second problem was solved by Pew, which awarded Indiana University's Center for Postsecondary Planning and Research, headed by George Kuh, a $3.3 million grant to get the project off the ground.

In 2000 Kuh and his staff launched what was dubbed the National Survey of Student Engagement. Research indicated that "engagement" could be reliably measured by surveying students themselves. So instead of being sent to college presidents, provosts, and admissions offices, NSSE (pronounced "Nessy") is distributed to students. Part of the survey is devoted simply to finding out how often students perform certain tasks. NSSE asks how often students write papers and how long those papers are, so the answers reflect not only the overall amount that students are writing but also whether teachers at a school tend to assign one big paper at the end of a course or a series of shorter papers spread throughout the semester. It asks students how often they talk with faculty members inside and outside of class, and about what (grades? assignments? career plans?). And, yes, it asks how often students read books—not only for class but also "for personal enjoyment or academic enrichment."

NSSE also questions students directly about particular services their institutions offer. Instead of simply measuring how much a school spends on advising, for instance, it asks students how much they benefit from it. In a similar vein, one set of questions asks whether students have done or plan to do certain things before graduation, such as take an internship, study abroad, or work on a research project with a professor; studies show that participation in one or more of them has a variety of desirable effects on engagement. Other questions relate to how students divide their week among different tasks. Not surprisingly, preparing for class, working on campus, and taking part in extracurricular activities all tend to improve engagement, whereas time spent working off campus, commuting, or caring for dependents does not. Certain parts of NSSE seek to delve into deeper kinds of engagement. One series of questions, for instance, measures what educators call "integration"—whether students are applying the knowledge they've learned. Students are asked how often they have worked on a paper or a project that required incorporating ideas or information from different sources; how often they've acknowledged diverse perspectives—different racial viewpoints, political beliefs, and so forth—in class discussions or writing assignments; and how often they've discussed ideas from readings or classes with people outside of class. The more students do these things, the better they learn and the greater the likelihood that what they're learning is transferable to other settings.

NSSE has caught on quickly. In the first year 276 institutions took part in the survey; this year the number was up to 431. NSSE encourages institutions to participate in the survey every few years. Today more than 730 schools, representing 58 percent of undergraduate enrollment at four-year institutions in the United States, are in the NSSE database. This year NSSE has surveyed about 150,000 students—by most standards an extraordinarily large sample size. (A major Gallup poll, by comparison, surveys about 5,000 people.) Schools taking part in the survey run the gamut from big to small, public to private, urban to suburban, selective to less selective. Even U.S. News has shown interest in the work of Kuh's center. For last year's edition of "America's Best Colleges" the magazine published some NSSE data (provided by a small selection of schools) on its Web site and in a section of the spinoff college guide, under the title "Seniors Have Their Say." The main holdouts, unsurprisingly, are the few dozen most selective and prestigious schools in the United States—that is, the ones that already do well on measures of reputation and resources. (Some of them no doubt worry about whether they would also do well on measures of engagement. A recent study by Gary Pike of fourteen research universities that are in the NSSE database found almost no correlation between the institutions' scores on NSSE measures and their scores on U.S. News's "indicators of excellence.")

Part of what accounts for NSSE's growth is that the survey is actually useful to schools. Taken together, the survey questions elicit information about five proven "benchmarks of effective educational practices": how academically challenging a school is, how much students interact with faculty members, how supportive of learning the campus environment is, how well the institution promotes "active and collaborative learning," and whether the school offers "enriching educational experiences." Kuh and his team plug in a particular institution's characteristics (its size, the academic ability of its incoming students, and the proportion who attend full time) to calculate how well it should do on those benchmarks, and provide schools with a report that grades them on how well they actually do. Schools that underperform in one area or another then have some sense of where they need to improve. Kuh also lets each institution pick a small group of other schools it considers peers; NSSE then shoots back the aggregate scores of that peer group. (Some schools also swap their survey results directly.) The idea is to get schools to reorient themselves toward improvements that can make a measurable difference in the education of their undergraduates. "Comparing almost any institution against a national number isn't very helpful," Kuh says, "especially if you're trying to mobilize a set of faculty members around thinking about how they might improve the undergraduate program."
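
The expected-versus-actual comparison Kuh's team performs can be illustrated with a simple regression sketch. Everything here is an assumption made for illustration: NSSE's real adjustment model, the institutional characteristics, and the numbers are all far richer than this toy Python version.

    # Toy version of "how well should this school do on a benchmark,
    # given its characteristics?" Predict each school's benchmark score
    # from size, incoming ability, and full-time share, then report the
    # gap between actual and predicted. All data are invented.
    import numpy as np

    # Columns: enrollment (thousands), mean incoming ACT, fraction full time.
    traits = np.array([
        [25.0, 24.0, 0.85],
        [ 4.0, 27.0, 0.95],
        [12.0, 20.5, 0.60],
        [30.0, 22.0, 0.75],
        [ 8.0, 25.5, 0.90],
        [18.0, 21.0, 0.70],
    ])
    actual = np.array([52.0, 61.0, 48.0, 50.0, 58.0, 47.0])  # benchmark scores

    # Ordinary least squares with an intercept term.
    X = np.column_stack([np.ones(len(traits)), traits])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    expected = X @ coef

    # A positive gap means the school outperforms what its inputs predict.
    for a, e in zip(actual, expected):
        print(f"actual {a:5.1f}  expected {e:5.1f}  gap {a - e:+5.1f}")

A school that beats its prediction is arguably adding educational value; one that falls short gets a concrete pointer to where it needs to improve.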

Although the vast majority of schools cannot easily improve their performance in terms of input measures (they can't build endowments from scratch, or simply attract twice as many National Merit Scholars for next year's freshman class), they can influence the experiences of their students. Among the several college presidents I spoke with about NSSE was Peter Smith, a higher-education veteran (and onetime member of Congress) who in 1994 helped to launch a branch of the California State University system in Monterey Bay. As schools go, CSUMB is relatively small, unselective, and unknown. "So when we started," Smith told me recently, "one of the first questions we were faced with was 'How is anyone going to know if you're good?'" He saw NSSE as a way to answer that question, and when the survey was launched three years ago, he eagerly signed up.

Today, NSSE measures the school both against its own past performance and against other state universities in California, and the results have already helped Smith and his colleagues to identify areas needing improvement. One problem, they discovered, was the campus environment. Situated on a former military base, the college "looks like the set of On the Beach after the bomb dropped," as Smith put it. "It's a pretty lonely place." So the school set out to improve its physical amenities; it has built an aquatic center and plans to knock down some of the abandoned and unused buildings that give the campus its ghost-town feel. The survey also confirmed something that administrators had suspected but weren't previously able to measure: despite a generally challenging and rewarding academic environment, freshmen and sophomores weren't connecting with those faculty members who would serve as advisers after the students declared a major. As a result students were having trouble making the transition from sophomore to junior year, and some were leaving school. These days, students must investigate various areas of study during their freshman year and plan a course of study that will qualify them for a major; during sophomore year everyone is assigned a major adviser to track his or her progress. "The single most important thing about the NSSE data," Smith said, "is that it takes important elements of your operations and it gives you hard data, very clear information, about how you're doing—removing it from hunch, opinion, and philosophy, all the lenses through which most conversations about higher education are conducted. It changes everything." Today CSUMB is the rare college that actually posts its NSSE results on the Web. "Good news or bad, we want people to know it," Smith said. "Because that's going to spur us to do better."

That said, NSSE isn't perfect. The chief complaint—made by, among others, UCLA's Alexander Astin, who sits on NSSE's board as what he calls a "friendly critic"—is that the survey doesn't really measure "added value." NSSE is administered to the freshmen and seniors at a given institution in a given year, rather than longitudinally, to the same set of students over time. So although it does measure whether a school is generally doing the right things, it's hard to tell from the results how much a school is helping students to learn. "NSSE is certainly a step above what U.S. News provides as far as what matters," says Linda Sax, a UCLA professor who runs the nationwide College Freshman Survey. "It descriptively tells us how engaged students are at an institution. But it doesn't allow us to assess institutional effectiveness. It doesn't tell the whole story." George Kuh and Peter Ewell both acknowledge that this is a drawback in NSSE's approach, one that they hope to address over time. A NSSE-size survey done longitudinally, they point out, would have been even harder to get off the ground, so they opted for something simpler, if less comprehensive.

A more practical drawback is that NSSE doesn't make its results available to the public. Although the survey was originally conceived as the basis for an alternative to the rankings, schools proved unwilling to take part in it unless they alone could decide whether to reveal their results. This means that although it can be a useful self-improvement tool for institutions, applicants and parents don't have access to NSSE data from most schools. Neither does U.S. News. For this fall's edition only about seven percent of the schools rated by the magazine allowed it to use their NSSE data—which is one reason why "America's Best Colleges" doesn't integrate the survey data into the rankings proper. And U.S. News uses what NSSE data it does get from colleges only sparingly: of the ninety-plus questions on the survey, the 2003 college guide included responses to fifteen.

Still, there is a way for college applicants and their parents to become familiar with the kinds of information NSSE is designed to measure. They can, for instance, ask admissions offices and campus tour guides the same kinds of questions that Kuh and his team put to institutions as a whole: How much contact do students have with professors? How often do they write papers? How do students receive help selecting classes? What do students like about the campus? How many undergraduates study abroad? (The survey helpfully publishes a pocket-guide checklist of questions to ask while visiting campuses.) As schools begin to sense a demand from their customers for NSSE-type information, they'll feel pressure to start providing it. Indeed, a similar sort of process is already under way within the higher-education community. A growing number of accreditation organizations now encourage schools to take part in surveys like NSSE and report back the results; likewise, state university systems have begun to use NSSE to assess their constituent schools. Over time, as its influence grows, the survey can become what Edgerton calls a tool of "soft accountability," pushing schools to cultivate practices that are known to improve education.

Kuh and his colleagues may never get elite schools to participate in NSSE. But in a certain sense they don't really need to. Whereas the smartest, highest-ability students are not all that dependent on institutions to get them to learn, colleges can make a great difference for students of average or lower academic ability. Inevitably, the vast majority of such students will attend institutions lacking the prestige and wealth of a Harvard or a Yale. But if NSSE succeeds, those students will not lack for a good education.

Copyright © 2003 The Atlantic Monthly

John McWhorter is a tough dude!

McWhorter was introduced to me by a closet racist who took great delight in McWhorter's dismissal of victimology as a rationale for African-American academic performance. I do not think that John McWhorter would delight in his closet-racist admirer. McWhorter is a linguist, and I would propose a dream team: Noam Chomsky (another tough dude) and John McWhorter in the same linguistics department. That would be interesting, and that dean would be one lucky dude. McWhorter is self-taught in 12 languages? I am bi-ignorant! If this be (fair & balanced) slanging, so be it.

November 15, 2003
Going at the Changes in, Ya Know, English
By EMILY EAKIN


Chester Higgins Jr./The New York Times
John McWhorter of UC-Berkeley and the Manhattan Institute.



On Dec. 8, 1941, the day after the Japanese attack on Pearl Harbor, Representative Charles A. Eaton, Republican of New Jersey, made his case in the House for why the nation should enter the Second World War.

"Mr. Speaker," his speech began, "yesterday against the roar of Japanese cannon in Hawaii our American people heard a trumpet call; a call to unity; a call to courage; a call to determination once and for all to wipe off of the earth this accursed monster of tyranny and slavery which is casting its black shadow over the hearts and homes of every land."

Last year, Senator Sam Brownback, Republican of Kansas, made the case for war in Iraq this way:

"And if we don't go at Iraq, that our effort in the war on terrorism dwindles down into an intelligence operation," he said. "We go at Iraq and it says to countries that support terrorists, there remain six in the world that are as our definition state sponsors of terrorists, you say to those countries: we are serious about terrorism, we're serious about you not supporting terrorism on your own soil."

The linguist and cultural critic John McWhorter cites these excerpts in his new book, Doing Our Own Thing: The Degradation of Language and Music and Why We Should, Like, Care (Gotham Books). They not only are typical of speeches made in Congress on both occasions, he argues, but also provide a vivid illustration of just how much the language of public discourse has deteriorated.

Riddled with sentence fragments, run-ons and colloquialisms like "go at," Senator Brownback's speech is still intelligible, but in Mr. McWhorter's view, it is emblematic of a creeping casualness that is largely to the nation's detriment.

"We in America now are an anomaly," Mr. McWhorter said over lunch at a restaurant in Midtown Manhattan this week. "We have very little sense of English as something to be dressed up. It's just this thing that comes out of our mouths. We just talk."

Mr. McWhorter, 38, a professor of linguistics at the University of California at Berkeley and a senior fellow at the Manhattan Institute, a policy research group in New York City, is hardly the first to complain about Americans' brazen disregard for their native tongue. But unlike many others, he says the problem is not an epidemic of bad grammar.

As a linguist, he says, he knows that grammatical rules are arbitrary and that in casual conversation people have never abided by them. Rather, he argues, the fault lies with the collapse of the distinction between the written and the oral. Where formal, well-honed English was once de rigueur in public life, he argues, it has all but disappeared, supplanted by the indifferent cadences of speech and ultimately impairing our ability to think.

This bleak assessment notwithstanding, Mr. McWhorter, an intense, confident and — perhaps not surprisingly — loquacious man, is not a curmudgeon or a fuddy-duddy. Nor, for that matter, a nerd, despite a résumé that bristles with intellectual precociousness.

Self-taught in 12 languages — including Russian, Swedish, Swahili, Arabic and Hebrew, which he initially took up as a Philadelphia preschooler when he was 4 — he is a respected expert in Creole languages. (In his spare time, he is compiling the first written grammar of Saramaccan, a Creole language spoken by descendants of former slaves in Suriname.)

A college graduate at 19 and a tenured professor at 33, he has published seven previous books, including the controversial best seller, Losing the Race: Self-Sabotage in Black America (The Free Press, 2000), in which he accused middle-class blacks of embracing anti-intellectualism and a cult of victimology. An African-American who is an outspoken critic of affirmative action, welfare and reparations, he has aroused the ire of many liberals and earned a reputation as a conservative.

But none of these exploits, he is at pains to show, should be taken to mean that he is not hip. His conversation is peppered with knowing allusions to pop culture — Britney Spears, Tori Amos, television sitcoms, rap and Broadway. ("I'm the world's only straight musical-theater cast-album fanatic," he joked.) An experienced bass-baritone who plays cocktail piano and has performed in amateur theatrics, he illustrated a point about contemporary English usage by singing two lines from Stephen Sondheim's new musical, "Bounce." In many ways, he insists, he is a typical product of America after the 1960's, the decade to which he dates the beginning of the nation's linguistic decline.

"I cannot recite a single poem," he said. "You can take a Russian teenager and say recite some poetry, and they will give you strophes of Pushkin. We can't do it. The only equivalent for an American under a certain age is literally Dr. Seuss or theme songs."

Until the 1960's, he maintains, informal cultural expression — like the experimental prose of Beatnik writers — was relegated to outsider status. But by the end of the decade, he insists, that had changed: the counterculture went mainstream, ushering in the laid-back new linguistic regime.

Over lunch, he ticked off the evidence: the Beatles and other rock 'n' roll bands became national obsessions; "Bell Telephone Hour," a prime-time television show featuring classical music, was canceled; Hollywood began to make movies like "Easy Rider" that captured the mumbling diction of everyday speech; participants at a Dartmouth College education conference declared that creative classroom learning should be stressed over grammar rules and formal essays.

At the same time, Mr. McWhorter argued, the Free Speech Movement was spreading on college campuses — along with expletive-laden posters, sit-ins and skimpy clothes. And black English, a language traditionally spoken, not written, was becoming increasingly popular among young people.

"During a counterculture era, when we've been taught not to trust anyone over 30 and that our leaders are corrupt, naturally the speech of the oppressed becomes more attractive," he said. "It's in this era that most pop music begins to be sung in a black accent even by white people who grew up in Connecticut."

Mr. McWhorter paints an elaborate picture of a culture in linguistic upheaval, but some scholars caution against singling out the 1960's as a time of unprecedented change.

"There has always been pop culture, or low culture, alongside the high," said Robin Lakoff, a professor of linguistics at the University of California at Berkeley who studies the effects of language on shaping social attitudes. "But because low culture has traditionally been nonliterate and unattended to by the higher punditry, it tends to vanish without much of a trace. So people like John compare an imaginary golden age of only high culture products with what we have today, when low culture's products exist for posterity on tape."

She might have cited Mr. McWhorter's book as an example of low and high culture co-existing side by side. Despite its high-minded content, it is written in a breezy, colloquial style that seems paradoxically to embody some of the linguistic traits that he deplores. Sentences like "Back in the day, rhetoric was how we sang our language to the skies" and "Linguistically, America eats with its face now" are common along with conversational locutions like "however that rubs you" or "the times were a-changin'."

The book's free-wheeling prose and unorthodox usage — Mr. McWhorter frequently combines a plural subject with a singular verb — have put off at least one critic, Jonathan Yardley in The Washington Post, who ended his review with the words: "Physician, heal thyself."

Yet Mr. McWhorter, who defends his writing style in the book, says it was a deliberate choice on his part. "I wrote the book in a style that channels speech in a way I certainly could not have gotten away with 40 years ago," he admitted. In part, he said, his goal was not to sound like a scold. But his prose is also, he insisted, a reflection of the era in which he was brought up.

"I'm very much a part of this," he said.

Copyright © 2003 The New York Times Company

Amen, Max Clio!

I recommend that Max Clio (cool nom de plume) contact the Educational Testing Service and offer to join the ranks of its Faculty Consultants (Readers).

Max would receive unparalleled training in essay evaluation or assessment or grading or scoring. (I hesitated over grading because it is an old-fashioned term in higher-ed circles; I love the latest buzzwords.) One summer week spent reading AP essays would gird Max's loins for what is, in my humble opinion (IMHO), the most difficult thing a college teacher must do. For me, the best parts of the piece are Max's takes on teaching in an open-admission institution. This isn't education, indeed! If this be (fair & balanced) elitism, so be it!


Tuesday, November 18, 2003
Grading on My Nerves

By MAX CLIO

I am an embedded observer in the decline of Western civilization. At least, that was the distinct sensation that came over me earlier today, while working my way through a pile of student papers.

I have never been keen on grading. I still remember my shock as a graduate instructor reading my first stack of papers. Someone actually handed in something this bad? Must it receive a passing grade, simply because it arrived on time? Others before me, I suspect, strived toward perfection on every assignment, then became instructors, only to discover the vast sea of student mediocrity.

Even so, at the other research institutions where I taught before arriving on this regional campus of a major state university, student papers were different. The typical batch contained a far larger proportion of talent and merit, and a far less demoralizing proportion of illiterate and semiliterate scribbling. Essential to a grader's morale is that occasional breathtaking paper that proves conclusively that excellence is attainable and expectations can be met. Those are the papers that sustain hope in the face of overwhelming evidence to the contrary. I don't get many now.

Autumn is especially rough. Amid the bright leaves and crisp air, the return to school is bracing. But once the grading phase of my courses begins, I remember that fall is when I teach mostly introductory-level courses to students who, in many cases, will not make it past one term of university life. Last year, I was forced to fail 20 percent of the students in a class of 40. "This isn't education," I told my dean. "It's triage." His reply: Nothing can be done. At an open-enrollment state institution, anyone who graduates from high school can take a whirl at university. After a term or two, many realize they aren't mature or prepared enough. A good number of them put in zero effort and seem to expect as little in return. A colleague who departed a chic private high school to teach here is astonished that she can dole out undesirable grades without consequence. "They don't come to your office," she said. "Their parents don't call."

On our campus, the average ACT score is 20.5. The range stretches, improbably, from a bare 13 to a respectable 32 (the maximum is 36). At the main campus in our system, the average score is 25. At a highly selective institution, the average would be a notch further up. In short, my students have the lowest scores among four-year university students. The pool here is closer to the general run of high-school graduates than anything instructors at selective institutions ever see.

When I was first trying to find my bearings as an instructor, I read an elegant memoir of teaching by Wayne Booth, who taught English at the University of Chicago. His diary of his students' progress and seminar experiences was humane, but would, if replicated by me, seem absurdly precious. Instead of tales of college girls' intellectual coming of age, I would record such stories as that of the 50-something student of mine who, having missed four classes straight this term, told me it was because she had been evicted from her home. The first night she drank wine coolers. The other days she was moving to a new place.

Besides the humiliations of poverty and class, another simple fact intrudes here: High schools no longer prepare most students to express ideas coherently or follow accepted English, let alone carry on serious intellectual work. My students can read carefully, when they do the reading. They ask good questions about lectures, showing attentiveness and curiosity. They discuss ideas and texts capably. Their weak spot is writing. The task falls to me -- in courses ostensibly about specified topics, not composition -- to patch up these leaky vessels.

Some of my colleagues, I notice, take a pass on this challenge. Their introductory-level courses feature a series of multiple-choice tests. They apparently haven't the patience to read essays, and automatic tabulators all but eliminate the time they spend on grading. This bothers me. I hold that a university education ought to include a significant writing component, that student writing deserves substantial professorial comment, that every student can become a better writer with practice, and that this is the last effective chance for them to get practice and feedback. If not us, who?

I hold this conviction without being able to extend much confidence to any particular set of student essays. The prospect of working through the stack of papers sitting next to me right now, for example, is enough to send me into fits of distraction. Writing this column is one stalling tactic among many that I have invented. I have a powerful aptitude for evasion, delay, and self-protection when faced with the chore of grading.

In graduate school I knew a professor who poured herself a generous glass of cabernet sauvignon before sitting down to grade papers. I would follow suit myself, except somnolence would be the result. My approach is far less satisfactory. I turn irritable. I grade restlessly. I start one paper, get through a page, then turn to the next paper in the stack, hoping for something better. No sooner do I start the next paper than I discover its grave weaknesses and move on to another.

Eventually, I abandon this hopscotch of aversion and work through the pile in a more deliberate way. When I finish a paper, I pick up some published item as a reward to myself, a reminder that somewhere else in the world, there is writing worth reading. My first batch of papers this term was so bad that I suddenly vowed to read the entire Bible, cover to cover. I made it to Abraham and Isaac. I may finish it yet.

I have been known to procrastinate all weekend, to the point where I must wake up at 4 a.m. on Monday morning in order to finish the papers by the appointed hour. Once I get down to brass tacks, however, I am methodical. I make extensive comments in green ink on every paper -- a reflection of my belief that better writing comes with recognition of audience, an impression reinforced by readers' comments. I strive, in my marginal notation, not to be harsh or cruel, but rather suggestive and helpful.

I urge students to take a position on the assigned text's validity or merit, not just summarize it. I encourage them to put their thesis somewhere near the outset of the paper, not in the middle or at the end. I demand that they pay closer attention to grammar and spelling. (Many of my students believe that the following set of words, for example, qualifies as a sentence: "A state of equality.") I recommend Bartleby.com, where Strunk and White's The Elements of Style and various other useful guides are available free.

Reports have it that I am a tough grader. This seems to mean that I do not hand out A's like there is no tomorrow, not that I haven't considered it. A more lenient hand would undoubtedly boost my evaluation scores, good as they are. There is an ethical quandary, moreover, in the issue of whether to grade by conviction or on a relative scale. A scale of conviction holds students to a universal bar of excellence; a relative scale or "curve" judges them against their immediate peers. This dilemma is real. Is it reasonable to expect students in a backwater to rise to supreme heights? Shouldn't they be judged relative to their campus? Why should I be tight with the A's, if most students at Ivy League universities get them, as reports indicate? In the end, however, I have decided to award no paper a high grade if its excellence does not actually have my complete confidence. I judge by standard, not pool. That makes me "tough."

Sometimes, the vagaries of grading do result in self-doubt. One student was so grateful for his B recently that I wondered if I had been too easy on him. In another case, I wondered after the fact if I'd been consistent in giving one student a C+ while another got a B-. Subtle differences of gradation on humanities papers are hardly hard and fast. It is impossible to draw up absolute criteria that would tip an essay one way or the other in marginal cases. However, major differences of grading do reflect very different levels of quality. On the whole, I am confident that my grades are meaningful and fair. Grades are earned -- not given.

Interior vacillations of mind and spirit are inevitable in any grader with professional dedication and a conscience. All that can be done in the darkest hours before the dawn is to apply intellectual standards as best one can -- and hope for the rare student who rises to the occasion.

Astonishingly, it does happen once in a while. The other day I walked into class to overhear a group of students comparing my comments on their first papers. They were earnest, almost sweetly so. They vowed to improve their writing the next time around. Maybe that will be the batch I've been waiting for.

Max Clio is the pseudonym of an associate professor of history on a regional campus of a major Midwestern research university. He welcomes letters sent to max_clio@yahoo.com.

Copyright © 2003 Chronicle of Higher Education