Friday, October 31, 2014

¿O Hermano, Donde Estás?

John Ellis "Jeb" Bush, the second son of POTUS 41 and Babs "The Enforcer" Bush, was the first two-term Dumbo governor of Florida and — now out of office — is among the cast of thousands of Dumbos who aspire to grab the brass ring in 2016 and become POTUS 45. This nation has weathered both a son and a grandson of a former POTUS in the White House, but a brother act would be a first. Bushy folklore proclaimed that The Jebster was the golden son, not The Dubster, but the Landslide of 2000 saw the ascent of The Dubster as POTUS 43. But wait, there're more Bushes lurking in the trenches. The Jebster and his wife Columba have produced another aspirant: George Prescott Bush ("George P.") is the eldest of three offspring saying Papá y Mami when addressing The Jebster and Columba. The Spanish-speaking Bushes will really find a lot of support among the Xenophobic Dumbos. If this is a (fair & balanced) bowl of hot, steamin' Olla de Fusión, so be it.


[x The Nation]
The Third Time's A Charm
By The Deadline Poet (Calvin Trillin)

Bushes, Led by W., Rally to Make Jeb ‘45’
—Headline, The New York Times

Should we have that family back?
They tend toward invading Iraq.
And let’s say Jeb triumphs. By then,
It might need invading again. Ω

[Calvin Trillin began his career as a writer for Time magazine. Since July 2, 1990, as a columnist at The Nation, Trillin has written his weekly "Deadline Poet" column: humorous poems about current events. Trillin has written considerably more pieces for The Nation than any other single person. A native of Kansas City, MO, Trillin received his BA from Yale College in 1957. He served in the army, and then joined Time.]

Copyright © 2014 The Nation



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Thursday, October 30, 2014

The BIG Question O'The Day: Quo Vadis, USA?

Ebola, ISIS, and Russian antics in Ukraine are but three of the great problems facing POTUS 44 and his foreign policy staff. But, wait, there's more: China, nukes in the hands of the Iranians/North Koreans, immigration, yada yada yada. The list of this country's shortcomings is long and growing longer by the day. Those who dwell in this bedeviled land love a quick and dirty solution to their problems: "It's our damn POTUS — worse than any of his predecessors." This country travels on a hard road filled with danger, and the passengers on board want to blame the conductor for not making the ride less stressful. Well, as Barry Commoner said decades ago: "There is no free lunch." If this is a (fair & balanced) dose of reality, so be it.

[x The American Interest]
What Is America's Role In The World?
By Ali Wyne

"Rarely,” the New York Times observed this July, “has a president been confronted with so many seemingly disparate foreign policy crises all at once.” Some of these crises, like the ascent of the Islamic State of Iraq and the Levant (ISIL), are bloody and fast-moving. Others, like the civil war in Syria, are grisly, protracted, and slow-moving. Others are grinding along sufficiently slowly that they feel less like crises than enduring foreign-policy challenges: consider the impasse over Iran’s nuclear program, which Graham Allison likens to “a Cuban missile crisis in slow motion,” and China’s quiet but purposeful campaign to settle its maritime disputes, which will likely play out over several decades.

It is safe to assume that much of the foreign-policy debate between the Republican and Democratic presidential nominees in 2016 will center on how the United States should adjust its foreign policy in response to these crises. Some observers contend that the Obama administration was unwise to reorient America’s strategic focus toward the Asia-Pacific; in light of recent developments, particularly those of this year, they believe the U.S. must accord comparable priority to constraining the potential for future Russian revanchism and preventing terrorist outfits from consolidating their influence amid the disintegration of the Middle Eastern and North African order. Other observers retort that the U.S. should continue to prioritize the rebalance, noting that the Asia-Pacific’s centrality to the global economic and military balances is poised to rise indefinitely.

This debate about the distribution of America’s strategic equities is critical. Its prescriptive value will be limited, however, unless it is accompanied by—or, better yet, subordinated to—a more fundamental discussion of America’s role in the evolving world order. The clearer America’s understanding of that role is, the more discriminating the United States can be in appraising how significantly a given crisis threatens its central objectives in the world. The less clear that understanding is, the more likely it will be to pursue a foreign policy that simply proceeds in accordance with the crises of the day: firefighting is a compelling alibi, after all, when one is struggling to define the focus of one’s foreign policy. Unfortunately, however, unless the crises of the day neatly align with the tectonic shifts in world order (the latter of which should be a central determinant of U.S. strategy), a crisis-driven foreign policy will inevitably succumb to disorientation and exhaustion.

In its simplest conception, a discussion about America’s role can be distilled down to three questions: What objectives does it seek to achieve in world affairs? What objectives does it have the operational capacity to achieve? And what objectives lie at the intersection of those two sets? Henry Kissinger proposes the following questions in the conclusion of World Order (2014):

What do we seek to prevent, no matter how it happens, and if necessary alone? The answer defines the minimum condition of the survival of the society.

What do we seek to achieve, even if not supported by any multilateral effort? These goals define the minimum objectives of the national strategy.

What do we seek to achieve, or prevent, only if supported by an alliance? This defines the outer limits of the country’s strategic aspirations as part of a global system.

What should we not engage in, even if urged by a multilateral group or an alliance? This defines the limiting condition of the American participation in world order.

Above all, what is the nature of the values that we seek to advance? What applications depend in part on circumstance?

While crises will inevitably shape the answers to these questions, their transience, as well as the haphazardness with which they occur, prevent them from offering enduring guidance about U.S. foreign policy.

There are several other factors that complicate America’s efforts to determine its world role, beginning with the gap between its operational capacity and its perceived imperatives. Economic weakness at home—comprising sluggish growth, high unemployment, and growing debt, among other phenomena—limits the potential scope of America’s engagement around the world. Moreover, notwithstanding its support for air strikes to counter ISIL’s territorial gains, the American public remains, on balance, reluctant to pursue a proactive foreign policy: according to a report [PDF] this June by the Pew Research Center, only 35 percent of Americans think “it’s best for the future of the country to be active in world affairs.” On the other hand, as the number of crises that challenge U.S. national interests grows, so does the pressure—from policymakers and observers at home and in allied countries—for the United States to be more engaged (the certainty with which it is argued that the U.S. should “do something,” it should be noted, often belies the absence of guidance on what the U.S. is actually to do).

A second factor is the absence of an overarching threat. Testifying before the Senate Select Committee on Intelligence in early 1993, James Woolsey famously observed that “we have slain a large dragon [the Soviet Union], but we live now in a jungle filled with a bewildering variety of poisonous snakes. And in many ways the dragon was easier to keep track of.” Most observers would agree that the number and variety of snakes have grown in the intervening two decades: consider, for example, the economic damage an individual or small organization can inflict via cyberspace. Still, it is hard to construe any of those snakes as an existential challenge. What about the rise of China, America’s putative superpower replacement? The International Monetary Fund estimates that its economy has overtaken America’s at purchasing power parity, and a range of respected organizations forecast that its defense spending could eclipse America’s well before the middle of the century. The Economist contends, moreover, that China is “not just challenging the existing world order. Slowly, messily, and apparently with no clear end in view, it is building a new one.”

Even if one concurs with this assessment, however, China is not an adversary. An increasingly formidable competitor? Yes. A growing rival in some respects? Yes. The only country that could credibly emerge as a peer competitor of the United States along current trend lines? Yes. But one need not indulge any illusions about China’s internal politics or strategic interests to appreciate that neither China’s dissolution nor terminal Chinese decline would advance U.S. national interests; rather, those phenomena would deal a blow to America’s fragile economic recovery, thereby further limiting its ability to engage abroad.

That reality is a third complicating factor. While Republican and Democratic presidents alike struggled to implement containment over nearly half a century, they agreed that that policy should culminate in the Soviet Union’s defeat. There is no such consensus about the goal of America’s policy towards China. As China’s comprehensive national power grows, it will become harder for the United States to maintain an equilibrium between the competitive dynamics that are intrinsic to their relationship and the collaborative ones that must be sustained for world order to progress. There is no self-evident way for the United States to reconcile its own narrative of exceptionalism with China’s. Nor, moreover, is there any clear way of concretizing a “new model” of great-power relations between two countries when neither one has any experience with or inclination towards sustaining world order in partnership with a possible equivalent.

A fourth factor is the potential for a vacuum in world order. Because no country or coalition besides the United States is either able or willing to replace it as the principal guarantor, some observers fear that U.S. abdication—whether deliberate or involuntary—would yield chaos. Richard Haass warns in the new issue of Foreign Affairs that “with U.S. hegemony waning but no successor waiting to pick up the baton, the likeliest future is one in which the current international system gives way to a disorderly one with a larger number of power centers acting with increasing autonomy, paying less heed to U.S. interests and preferences.” There are, of course, prominent dissenters: Ian Buruma and Barry Posen, for example, argue that it is simplistic and self-serving for the United States to posit a dichotomy between a world in which it plays the preponderant role in sustaining order and one that devolves into anarchy. Still, few observers are itching to test whether a world without a clear anchor can be equally peaceful and prosperous. While the United States may be tempted to preempt a vacuum by playing its current role indefinitely, that course also entails considerable risks: it could enervate the U.S. economy and encourage America’s allies in Europe and the Asia-Pacific to continue free-riding off of it. An exhausted United States and a network of U.S. allies that are unprepared to provide for their own security hardly provide a stable foundation for a new world order.

The United States may not be able to develop a grand strategy; indeed, in a world of ever-increasing complexity, perhaps the mere desire to attain one is quixotic. It is certain, however, that U.S. foreign policy will grow more incoherent the longer it postpones a candid discussion on its role in the world. Ω

[Ali Wyne is a member of the adjunct staff at the RAND Corporation and a contributing analyst at Wikistrat. In 2013 he coauthored a book with Graham Allison and Robert D. Blackwill — Lee Kuan Yew: The Grand Master’s Insights on China, the United States, and the World. Wyne received a BS (management science and political science) from the Massachusetts Institute of Technology.]

Copyright © 2014 The American Interest



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Wednesday, October 29, 2014

Today, This Blog Introduces You To The Virtual (Internet) Equivalent Of Navin R. Johnson ("The Jerk")

Steve Martin said it best in "The Jerk":

[x YouTube/BryanReynolds Channel]
"The New Phonebook Is Here"
By Steve Martin

This post about Google Scholar provides an early link to that Google search tool. For a test ride, enter this blogger's name in the search window. Hint: rhymes with Steel Crapper or can be found in the left side of the page under "About Me." Hit the blue button to the right of the Google Scholar search window and see what you get. If this is a (fair & balanced) real-life example of feeble-mindedness, so be it.

[x BackChannel]
Google Scholar Turns 10
By Steven Levy

Anurag Acharya is the key inventor of Google Scholar, but the real origin of the project lies in his college years at the Kharagpur campus of the Indian Institute of Technology. The IIT is India’s version of MIT and Stanford combined, and has produced a long list of now-celebrated engineers and executives at Internet companies here and abroad. But even in that elite school, it was difficult for students to get hold of relevant scholarly materials. For Indian high schoolers, it was nearly impossible. “If you knew the information existed, you would write letters,” he says. “That’s what I did. Roughly half of the people would send you something, maybe a reprint. But if you didn’t know the information was there, there was nothing you could do about it.” Acharya was haunted by the realization that great minds were deprived of inspiration, and that wonderful works did not have the impact they could have had because of their limited distribution.

The eventual solution to this problem would be Google Scholar, which celebrates its tenth anniversary this November. Some people have never heard of this service, which treats publications from scholarly and professional journals as a separate corpus and makes it easy to find otherwise elusive information. Others have seen it occasionally when a result pops up on their search activity, and may even know enough to use it for a specific task, like digging into medical journals to gather information on a specific ailment. But for a significant and extremely impactful slice of the population (researchers, scientists, academics, lawyers, and students training in those fields), Scholar is a vital part of online existence, a lifeline to critical information, and an indispensable means of getting their work exposed to those who most need it.

But Acharya’s path towards its creation was a twisted one. He came to America for his doctorate and became an assistant professor of computer science at the University of California at Santa Barbara. He was successful but vaguely unsatisfied. He felt the problems he was tackling were not hard enough, were insufficiently sweeping to make a real difference. One day in 1999 he visited a colleague who had taken a temporary leave to work at an odd startup in Palo Alto, called Google. The visit came at a time when Acharya was reexamining his career, asking himself whether he was really grappling with hard, meaningful problems. The problem of search — essentially fulfilling Google’s mission at the time of organizing and granting access to the world’s information — seemed to be a problem worth solving. Especially since it resonated with his experience in his home country.

He joined Google in 2000, and for several years took charge of the technology of Google’s indexing. This is the system that “crawled” through all the Web, gathering all its contents so the company could provide the equivalent of a back-of-the-book index of the world’s biggest tome. Part of his job was expanding the index, convincing not only web administrators but also publishers, businesses and government agencies to allow Google to crawl their data. He was also in charge of keeping the index fresh, a massive task that involved pushing computer science to the limit. The job was high pressure. The system was nowhere near as stable as it would become, and after a few years Acharya was burnt out.

“Either I have to leave the company or I have to do something that can be interesting to me, but is lower pressure,” he now recalls as his mindset.

So he got permission to work with another engineer, Alex Verstak, to create Google Scholar, a free and widely accessible service that would live alongside search to address the problem that so thoroughly vexed him as a student. There were a number of challenges. The ranking signals that worked so well in general search were not always the best for researchers seeking knowledge.

On the other hand, there were advantages to introducing search to this particular body of knowledge. Unlike with general search, Scholar does not have to make tough guesses about a user’s intent. Obviously, there’s no chance someone is using Scholar to look for a good Mexican restaurant or the directions to someone’s house—he or she is seeking an article or authors from a bounded set of sources that match the query. What matters in Scholar are the sources of scholarship, the subject matter and the identity of the authors. The sources were fairly easy to identify (though not always to crawl). “Scholarship is not an undifferentiated mass as in Web search,” he says. “If everybody in the scholarly field believes something is a scholarly source, then it is a scholarly source, because we are trying to present to the users what they are looking for.”

Also, the nature of academic papers presented some opportunities for more powerful ranking, particularly making use of the citations typically included in academic papers. Those same scholarly citations had been the original inspiration for PageRank, the technique that had originally made Google search more powerful than its competitors. Scholar was able to use them to effectively rank articles on a given query, as well as to identify relationships between papers.
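
In spirit, the citation signal is easy to sketch. The toy example below is purely illustrative (hypothetical data, and not Scholar's actual ranking, whose signals Google has not published): it simply orders the papers matching a query by how often the other papers in the corpus cite them.

```python
# Illustrative sketch only: rank query matches by incoming citations.
# This is NOT Google Scholar's actual algorithm; its signals are unpublished.

papers = {
    "A": {"text": "neural networks for protein folding", "cites": ["B", "C"]},
    "B": {"text": "protein folding dynamics", "cites": ["C"]},
    "C": {"text": "statistical mechanics of protein folding", "cites": []},
    "D": {"text": "restaurant recommendation systems", "cites": []},
}

def citation_counts(papers):
    """Count how many papers in the corpus cite each paper."""
    counts = {pid: 0 for pid in papers}
    for paper in papers.values():
        for cited in paper["cites"]:
            if cited in counts:
                counts[cited] += 1
    return counts

def search(query, papers):
    """Return IDs of papers matching the query, most-cited first."""
    counts = citation_counts(papers)
    matches = [pid for pid, p in papers.items() if query in p["text"]]
    return sorted(matches, key=lambda pid: counts[pid], reverse=True)

print(search("protein folding", papers))  # ['C', 'B', 'A']
```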

After a number of tests and tweaks, the team showed the prototype to Larry Page. The co-founder’s reaction: “Why is this not live yet?” On November 18, 2004, Scholar was indeed live.

Google Scholar was revolutionary for a number of reasons. Acharya and his team worked hard to get academic publishers to allow Google to crawl their journals. Since many of the articles unearthed by Scholar were locked behind paywalls, simply locating something in a search would not mean that a user could read it. But he or she would know that it existed, and that makes a tremendous difference. (Imagine setting off on a research project and finding out months later that someone had done the same work.) Google also pushed the paywall publishers to allow users to see abstracts of the work. The world’s biggest online archive of journal articles, JSTOR, offered only scans of articles, and had no way to separate the abstract from the whole piece. (Those accessing JSTOR through subscribing institutions could see full text.) So Scholar convinced JSTOR to let its users see the first scanned page of the article for free. “Often the first page has the abstract, or in older articles you have the introduction,” says Acharya, whose job title at Google is Distinguished Engineer. “That at least allows you to get a sense of it so you can decide whether you should put in additional effort.” Google Scholar will then provide the information that will help users get the complete text, whether online for free, downloaded for a fee, or in a nearby library.

(All Google users benefited from all that newly crawled information, too, as the company included those articles and books in its general search index.)

At launch, Google Scholar won wide acclaim, even from those generally skeptical about the company. Two well-known library scientists, Shirl Kennedy and Gary Price, wrote, “When big announcements come from Google and web engines, we often get nervous…. Not this time, however. This is BIG news and something that should have been around for years.” (There was some criticism, though. One complaint was that Google Scholar had no API to allow other services to access it. Others said that since Google didn’t share information like its ranking algorithm and all its sources, it fell short of a “scholarly” standard.)

Some in the research community favorably contrasted it to Google’s more controversial Book Search, which was launched at the same time. Scholar avoided the sort of copyright controversy that Book Search generated, despite the fact that the scholarly publishing world is a war zone, with an increasing number of academics lodging protests against powerful publishers who control the major journals. This is a conflict pitting profit against public good. It was the principle of open research that led Internet activist Aaron Swartz to download a corpus of JSTOR documents legally provided to MIT; the government prosecution of that act ended only with Swartz’s suicide. Google Scholar does not officially take a stand on the issue, but its implicit philosophy seems to endorse an egalitarian spread of information. In any case, when possible, Scholar tries to help negotiate around paywalls for non-subscribers by linking to articles in multiple locations — often, authors of paywalled works have free versions on their personal websites.

Over the years Acharya has worked hard to change reluctant publishers’ minds. “It is knocking on one door after another,” he says. “Elsevier took three, four, five years. The American Chemical Society was somewhat slower, but largely it is knocking on door after door after door.”

Acharya has kept knocking on doors, because from the very moment Scholar launched, he has been devoted to improving the product. “The first version worked well, but I was not happy with it,” he says. Working with Verstak and a small team, he has consistently added features (one particularly useful addition identifies articles related to the ones returned for a specific search) and even expanded Scholar’s reach to ambitious new realms, most notably judicial case law in 2009. (This was described as “a shot across the bow of the multi-billion dollar legal publishing business” which previously controlled that public information.) Acharya’s role spans not only engineering but operations, partner relations, library liaison, contracts, and evangelism.

The engineering isn’t an afterthought, though. A lot of artificial intelligence is necessary to keep improving the system. For instance, Acharya and Verstak got a patent for “Identifying a primary version of a document.” (By the way, I found out this factoid by using Google Scholar.)

Another innovation of Scholar has been its ability to correctly identify the authors of books and papers, an important feature for those interested in the work of a specific researcher. “Scholarship tends to have a lot of authors named as ‘Jay Smith’ — there are a lot of Jay Smiths out there,” he says. “And if you think that’s an easy problem, think of the name Huang — there are about 200 Chinese last names that cover 95% of authors.” Google tackles this problem by creating clusters of papers that are likely to be written by the same individual and, for the last step, asks the actual authors (who almost inevitably use the service) to identify which groups of papers are theirs. Asking users directly to create search results seems very un-Googley, but as Acharya says, “We can’t automatically solve this problem entirely—so we just give you a list of clusters, you say, ‘These are mine,’ and you are done. The rest is automated.” Knowing who the authors are, Google can create profiles of where they fit into academia—who are their coauthors, who they have cited, who has cited them.
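
The clustering step can also be sketched in miniature. The example below uses hypothetical names and data, and a single crude feature (shared coauthors); the real system's features and thresholds are not public. It groups papers that list the same ambiguous author name whenever they share a coauthor, producing the candidate clusters an author would then be asked to claim.

```python
# Illustrative sketch of author disambiguation by coauthor overlap.
# Hypothetical data; Google Scholar's real features and thresholds are unpublished.

papers = [
    {"id": 1, "authors": {"J. Smith", "A. Chen"}},
    {"id": 2, "authors": {"J. Smith", "A. Chen", "B. Lopez"}},
    {"id": 3, "authors": {"J. Smith", "R. Gupta"}},
    {"id": 4, "authors": {"J. Smith", "R. Gupta", "K. Okafor"}},
]

def cluster_by_coauthors(papers, name="J. Smith"):
    """Group papers listing `name` into clusters that share at least one coauthor."""
    clusters = []  # each: {"papers": [ids], "coauthors": set of names}
    for paper in papers:
        coauthors = paper["authors"] - {name}
        for cluster in clusters:
            if cluster["coauthors"] & coauthors:  # shares a coauthor: likely the same person
                cluster["papers"].append(paper["id"])
                cluster["coauthors"] |= coauthors
                break
        else:
            clusters.append({"papers": [paper["id"]], "coauthors": set(coauthors)})
    return clusters

for cluster in cluster_by_coauthors(papers):
    print(cluster["papers"], sorted(cluster["coauthors"]))
# [1, 2] ['A. Chen', 'B. Lopez']
# [3, 4] ['K. Okafor', 'R. Gupta']
```

The last, human step is the one Acharya describes: present each cluster to the real author and let him or her claim it.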

Acharya’s continued leadership of a single, small team (now consisting of nine) is unusual at Google, and not necessarily seen as a smart thing by his peers. By concentrating on Scholar, Acharya in effect removed himself from the fast track at Google. He was part of a number of amazingly talented Ph.D. engineers that [sic] joined the company around 2000, and some of them are still doing work vital to Google’s core, pushing boundaries of computer science and artificial intelligence. He has the engineering chops to work with them. But he can’t bear to leave his creation, even as he realizes that at Google’s current scale, Scholar is a niche.

Only at Google, of course, would the world’s most popular scholarly search service be seen as a relative backwater. Acharya isn’t permitted to reveal how big Scholar’s index is, though he does note that it’s an order of magnitude bigger than when it started. He can also say, “It’s pretty much everything — every major to medium-size publisher in the world, scholarly books, patents, judicial opinions, most small journals…. It would take work to find something that’s not indexed.” (One serious estimate [PDF] places the index at 160 million documents as of May 2014.) But like it or not, the niche reality was reinforced after Larry Page took over as CEO in 2011, and adopted an approach of “more wood behind fewer arrows.” Scholar was not discarded — it still commands huge respect at Google, which, after all, is largely populated by former academics—but it was clearly shunted to the back end of the quiver. Not only was Scholar missing from the list of top services (Image Search, News, etc.), but it was also bumped from the menu promising “more” services like Gmail and Calendar. Its new place was a menu labeled “even more.”

Asked who informed him of what many referred to as Scholar’s “demotion,” Acharya says, “I don’t think they told me.” But he says that the lower profile isn’t a problem, because those who do use Scholar have no problem finding it. “If I had seen a drop in usage, I would worry tremendously,” he says. “There was no drop in usage. I also would have felt bad if I had been asked to give up resources, but we have always grown in both machine and people resources. I don’t feel demoted at all.”

Acharya is now 50. He’s excited about adding new features to Scholar — improving the “alerts” function and other features that help users discover information important to them that they might not know is out there. Would he want to continue working on Scholar for another ten years? “One always believes there are other opportunities, but the problem is how to pursue them when you are in a place you like and you have been doing really well. I can do problems that seem very interesting to me — but the biggest impact I can possibly make is helping people who are solving the world’s problems to be more efficient. If I can make the world’s researchers ten percent more efficient, consider the cumulative impact of that. So if I ended up spending the next ten years [doing] this, I think I would be extremely happy.”

That satisfaction seems plenty for Acharya, especially when he thinks of the millions of people — everywhere from rural India to Mountain View, California — who have the world’s scholarship at their fingertips, for free. But will Google itself spring for at least a doodle on November 18, when Scholar turns ten? Ω

[Steven Levy is a freelance technology writer who was a senior writer for Wired, following a dozen years as chief technology writer and a senior editor for Newsweek. His most recent book is In The Plex: How Google Thinks, Works, and Shapes Our Lives (2011). Levy received a BA (English) from Temple University and an MA (literature) from the Pennsylvania State University.]

Copyright © 2014 Medium



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Tuesday, October 28, 2014

Here's Another World-Class Example Of Magical Thinking

Duroc aviators will align their wings for a flyover above the frozen tundra of Hades before the good voters of Texas vote their asses (Donkeys?) over their bile ducts in any future elections in the Lone Star State. No offense to John B. Judis, but the swing from Ass to Bile voting occurred earlier than 1980, when St. Dutch became POTUS 40. This blogger was a voting judge in a Democrat primary precinct during the Texas primary election of 1976. That was the year that St. Dutch mounted an insurgent campaign to unseat the non-elected Dumbo POTUS 38 (thanks to The Trickster) for the Dumbo nomination. In the configuration of the 1976 primary election in Amarillo for that precinct, the Ass/Donkey polling place was in Russell Gym (former Women's Gymnasium) on the Collegium Excellens campus and the Dumbo polling place was in Carter Gym (former Men's Gymnasium). The gymnasia were located across the street from one another. In 1976, the election judge in each polling place stamped the aspiring voter's Voter Registration card with "Democrat Voter" if the aspirant stood at the table in the Ass/Donkey polling place; across the street, the aspirant's card was stamped with "Republican Voter." Once stamped and validated by a clerk, the aspirant was handed a punch-card ballot. The ballots were completely different in each location: the Ass/Donkey ballot contained only choices among Asses and the Dumbo ballot contained only choices among Dumbos. No crossover voting was allowed in Texas then (nor is it allowed now, unless a sneaky Ass or Dumbo votes in the "wrong" primary in the hope of casting enough votes for a weaker candidate so that the best candidate will win the general election). Anyway, to make a long story short, this blogger/voting judge was faced with a near riot when Dumbo voters in the wrong polling place couldn't find St. Dutch's name on the ballot. Do-overs are not allowed under the Texas Election Code, and demands that the "Democrat Voter" stamp be expunged so that the aspiring and mistaken voter could cross the street and vote for St. Dutch were refused, to cries of outrage. This scene played out in May 1976 as large numbers of voters who had always cast their ballots in Ass/Donkey polling places tried to vote for St. Dutch. The Dumbos have been getting even for this electoral treachery ever since. If this is a (fair & balanced) story from the annals of the Texas Direct Primary, so be it.

[x TNR]
Yes, Texas Could Turn Blue
By John B. Judis

“Do you really think Wendy Davis is going to win?” I asked Jenn Brown, the executive director of Battleground Texas. “I sure do,” she replied. Brown and her top staff may be the only people in Texas who think that Democrat Davis, who is running for governor, can defeat Republican Greg Abbott next week. But the larger question is whether Battleground Texas’s strategy of turning Texas Blue, which is currently married to Davis’s candidacy, can over the next two, four, or six years make Texas, which hasn’t elected a Democrat to statewide office since 1994, or voted for a Democratic presidential candidate since 1976, competitive again.

Battleground’s strategy, as it was presented to me during a recent visit to Texas, relies primarily on demographic trends within the state. Texas has already become a majority-minority state like California. According to 2013 census figures, only 44 percent of Texans are “Anglos,” or whites; 38.4 percent are Hispanic; 12.4 percent African-American; and the remainder Asian-American and Native American. By 2020, Hispanics are projected by the Texas State Data Center to account for 40.5 percent of Texans and African-Americans for 11.3 percent, compared to 41.1 percent for Anglos. Texas’s minorities generally favor Democrats over Republicans, but they don’t vote in as great a proportion as Anglos, who have favored Republicans by similar percentages. Battleground’s strategy [PDF] assumes that if it and other organizations like the Texas Organizing Project can get many more minorities, and particularly Hispanics, to the polls, then, as minorities increasingly come to outnumber Anglos, Democrats can take back the state.

Battleground’s strategy has met with skepticism in some quarters. My former colleague Nate Cohn has argued that the numbers don’t add up. Former Republican Party executive director Wayne Thorburn argues in Red State (2014) that the “Texas Democratic Party may have a more serious deficit with Anglo voters than Republicans do with Hispanics.” Indeed, there are grounds for skepticism. If you take the numbers, but keep the turnout and the degree of party support consistent with the most recent election in 2012, then it is unlikely the Democrats could achieve a majority by 2020, and perhaps not in the following decade. But if you take politics into account—if you assume that developments in both parties could alter baseline projections—then the Battleground strategy looks far more plausible.

Let’s first look at how the population figures translate into votes. While the overall number of minorities already surpasses that of Anglos, the number of voters has not. Who votes depends on how many in each group are eligible to vote. In 2014, about 46 percent of Hispanics are eligible to vote. The rest are not citizens or are under 18. By contrast, voter eligibility among whites is in the high seventies and among African-Americans is in the low seventies. The other factor is turnout. In 2012, only about 39 percent of eligible Hispanics voted, compared to a little over sixty percent of Anglos and African-Americans. So in the 2012 election, and most likely in the 2014 election, in spite of Battleground’s considerable efforts, Anglo voters, who are likely to favor Republican candidates, will outnumber minority voters.

In 2020, a presidential election year, the numbers should look different. Minorities’ population edge should have increased, and eligibility among Hispanic voters, which has been growing, should be around 50 percent. I have tallied four scenarios for 2020. They show the conditions that would finally lead to a Democratic victory in 2020. (In each of these, I am keeping black turnout and support constant, and assuming that Asian and Native American eligibility and turnout increase slightly, and support for Democrats remains at about 60 percent. To be safe, I am also using the conservative Texas State Data Center figures, which some political scientists believe understate Hispanic growth.)

Scenario one: Hispanic turnout increases to 45 percent (which is still less than the national average for Hispanic voters), support for the Democratic presidential candidate remains at 65 percent, and only 25 percent of whites back the Democratic candidate. In that case, the Republican candidate would get almost 54 percent of the vote.

Scenario two: Hispanic turnout increases to 50 percent (which is still less than neighboring New Mexico), and support for the Democratic candidate climbs to 72 percent (which is still less than Hispanic support for Democrats in Colorado), but white support for the Democrat remains at 25 percent. In this case, the Republican squeaks by with a little over 51 percent of the vote.

Scenario three: Hispanic turnout only increases to 45 percent and support remains at 65 percent, but the Democrat gets 30 percent of the white vote. The Republican squeaks by with a little over 50 percent of the vote.

Scenario four: Hispanic turnout remains at 50 percent and support at 72 percent, but white support for the Democrat climbs to 30 percent. Then the Democrat gets 51.5 percent of the vote.

In other words, a Democratic presidential candidate could carry Texas in 2020 if Hispanic turnout grows, support for the Democratic candidate nears or exceeds 70 percent, and Democrats gather 30 percent of the Anglo vote. If the Democrats can’t attract more than 25 percent of the Anglo vote, then even the most energetic efforts at Hispanic mobilization won’t get their candidate across the finish line.
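
The arithmetic behind these scenarios is a straightforward composition of population share, eligibility, turnout, and candidate support. Here is a minimal sketch of that calculation; the population shares and the Hispanic figures follow the article, while the Anglo and African-American eligibility and turnout numbers, African-American support, and the "Other" row are illustrative assumptions rather than Judis's actual inputs.

```python
# Sketch of the scenario arithmetic: a group's share of ballots cast is
# (population share) x (share eligible) x (turnout); the Democratic share of
# the vote is the support-weighted sum. Only the population shares and the
# Hispanic eligibility/turnout/support figures come from the article; the
# other inputs are assumptions chosen for illustration.

def democratic_share(groups):
    """groups maps name -> (population share, eligible share, turnout, Dem support)."""
    total = dem = 0.0
    for pop, eligible, turnout, support in groups.values():
        ballots = pop * eligible * turnout  # group's share of all votes cast
        total += ballots
        dem += ballots * support
    return dem / total

# Roughly Scenario two: Hispanic turnout at 50 percent, Hispanic support at
# 72 percent, white support for the Democrat stuck at 25 percent.
groups = {
    "Anglo":    (0.411, 0.78, 0.62, 0.25),  # eligibility/turnout assumed
    "Hispanic": (0.405, 0.50, 0.50, 0.72),
    "Black":    (0.113, 0.72, 0.62, 0.90),  # support assumed
    "Other":    (0.071, 0.55, 0.50, 0.60),  # all assumed
}

print(f"Democratic share of the vote: {democratic_share(groups):.1%}")
```

Under these assumed inputs the Democrat comes out a bit under 49 percent, roughly where the article's scenario two lands, with the Republican squeaking by.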

Raising Hispanic turnout and support for Democratic candidates obviously requires the kind of voter mobilization that Battleground and other groups are undertaking. This year, Battleground claims to have recruited 32,000 volunteers to register voters and get them to the polls. Registration in the state’s five largest counties is up two percent, even though registration often goes down between presidential and mid-term elections. But success among Hispanics also depends on building organizations that function between elections. Battleground, which is an out-of-state creation, may not be best suited for this task. “Organizations come in for the election, and then they are gone,” Jorge Montiel, the lead organizer for San Antonio’s Metro Alliance, laments.

Success in mobilizing the Hispanic vote also depends on nominating candidates in Texas (and also nationally) who can appeal to these voters. According to several Democrats I talked to, Davis hasn’t “connected” to these voters. In the primaries, she even lost several small counties to a token Hispanic opponent. She is principally known in the state for her stand on behalf of abortion rights—whereas many of Texas’s Hispanics oppose abortion. Democrats urged San Antonio’s former mayor Julian Castro, now the secretary of Housing and Urban Development, to run, but he declined, one San Antonio political leader speculated, probably because he feared certain defeat.

Finally, success in increasing Hispanic support for Democrats will depend on what Republicans in Texas and nationally do. In Texas, Republican governors have steered clear of the harsh rhetoric about “illegal aliens” that proliferates among many other Republicans. Abbott boasts a Latina wife. As a result, Texas Republican candidates for state office have gotten about 40 percent of the Hispanic vote, which has virtually assured their victory. This year the Hispanic Bush, George P. Bush, is running for Land Commissioner, and if he becomes a leader of the party, he could keep many Hispanics voting for Republicans in state races.

But there are Tea Party Republicans, including Senator Ted Cruz, who decry efforts at immigration reform. In Arlington, a suburb of Dallas-Fort Worth, Tea Party favorite Tony Tinderholt, who ousted a moderate incumbent in this year’s primary, has warned that “people are going to die” to protect the border from people “with plans to do horrible disgusting things to American citizens.” If Cruz and Tea Party types take over the Texas party, then it will become easier for Democrats to win votes in elections for high state office, which are held between presidential elections.

Texas Democrats are likely to have an easier time painting the national Republican Party and its candidates as hostile to Hispanics. In 2012, Obama got 71 percent of the Hispanic vote nationally against Mitt Romney, who used his opposition to immigration reform to win the nomination. Last year, Florida Senator Marco Rubio’s support for immigration reform appeared to doom his presidential prospects. So even if Texas’s Republicans foil Democratic efforts to boost their Hispanic support in state elections, national Republicans might help Democrats increase Hispanic support for a Democratic presidential candidate.

In Texas, white voters have blended the anti-government ethos of the West and the deep South. Many Texas white voters began changing their party allegiance from Democrat to Republican after 1980 without changing their ideology. But this bedrock conservatism among white Texans has been mitigated by in-migration from less Republican states and by the development of what Ruy Teixeira and I called “ideopolises”—large metro areas dominated by professionals who produce ideas. By garnering support in the Dallas, Austin, San Antonio, Houston, and El Paso metro areas, the Democrats might be able to get the 30 percent or more of the vote they need in presidential elections, and eventually the 35 percent they need in state elections.

In these metro areas, Texas Democrats can attract the same white voters who boosted Democratic hopes in states like Virginia and North Carolina: younger voters, who came of age after the Reagan-Bush era, professionals, and women. Davis’s candidacy has probably helped among these voters. In a late September poll that showed Davis behind Abbott by fourteen points, she still had an edge among women and voters 18 to 44, while getting trounced among male and older voters. (In the same poll, Davis only got 50 percent of the Hispanic vote.) Mustafa Tameez, a Houston Democratic consultant, says that the Texas state legislature’s lurch to the right, which spawned Davis’s candidacy, will win over many of these voters. “The urban vote and women are the key to Democrats winning Texas,” Tameez says.

Texas Democrats’ ability to win over white voters will also depend on what happens to the national party. Obama remains deeply unpopular in Texas—identified with whatever failures white Texans ascribe to the federal government. There were no exit polls in the 2012 election, but Nate Cohn has estimated that Obama only got 20 percent of the white vote. Whites need to feel comfortable voting for a candidate identified with the national party. Tameez and other Democrats believe that Hillary Clinton, who defeated Obama in the 2008 Texas primary, will fare far better among the state’s Anglos than Obama did. But even if they nominate a candidate more palatable to urban whites, the Democrats may have to wait until 2020 to have a good shot at winning Texas in a presidential vote.

Of course, Texas Republican politicians understand the threat that the state’s demographic changes pose. Last year, Abbott warned that with the formation of Battleground, Texas was “coming under a new assault, an assault far more dangerous than when the leader of North Korea threatened when he said he was going to add Austin, Texas, as one of the recipients of his nuclear weapons.” Abbott and the Texas Republicans have responded to the threat with new restrictions on voting and on registering voters that are designed to make it more difficult for minorities to get to the polls. But these restrictions are double-edged. They will make it more difficult to vote, but they can also provide a rallying cry for Battleground and other groups trying to get out the vote. They can give the lie to Republican claims that they are sympathetic to the state’s Hispanics and, in so doing, speed the day of reckoning for Texas Republicanism. Ω

[John B. Judis is a senior editor at The New Republic and a contributing editor to The American Prospect. His first book was William F. Buckley, Jr.: Patron Saint of the Conservatives (1988); his most recent is Genesis: Truman, American Jews, and the Origins of the Arab/Israeli Conflict (2014). Judis received both a BA and an MA in philosophy from the University of California at Berkeley.]

Copyright © 2014 The New Republic



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Monday, October 27, 2014

Roll Over, Tom Tomorrow — Our Worst Current Health Crisis Is Dumbo/Moron Asininity

A pair of epidemics that Tom Tomorrow missed diagnosing were Dumbo 4-Axe-Handle-Dumbassitude and Moron Bottomless Stupidity. If you ask "How dumb are the Dumbos/Morons?" this blogger would respond: "How high is up?" or "How long is a piece of string?" Now, we have the spectacle of Governor Andrew Coma (D-NY) impersonating Governor Heavy-Jumbo (R-NJ) with a xenophobic quarantine policy toward airline arrivals from Ebola Ground Zero in West Africa without any evidence of infection. If this is a (fair & balanced) red badge of stupidity, so be it.

[x This Modern World]
Other Dangerous Epidemics
By Tom Tomorrow

Tom Tomorrow/Dan Perkins

[Dan Perkins is an editorial cartoonist better known by the pen name "Tom Tomorrow". His weekly comic strip, "This Modern World," which comments on current events from a strong liberal perspective, appears regularly in approximately 150 papers across the U.S., as well as on Daily Kos. The strip debuted in 1990 in SF Weekly. Perkins, a long time resident of Brooklyn, New York, currently lives in Connecticut. He received the Robert F. Kennedy Award for Excellence in Journalism in both 1998 and 2002. When he is not working on projects related to his comic strip, Perkins writes a daily political weblog, also entitled "This Modern World," which he began in December 2001. More recently, Dan Perkins, pen name Tom Tomorrow, was named the winner of the 2013 Herblock Prize for editorial cartooning.]

Copyright © 2014 Tom Tomorrow (Dan Perkins)



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Sunday, October 26, 2014

Roll Over, FDR — In The 21st Century, The Only Thing We Have To Fear Is... Nostalgia?

We are living in troubled times. According to Yuval Levin we yearn for the wrong things: the Left yearns for the 1960s (the Age of Aquarius) and the Right is jonesin' for the 1980s (the Age of St. Dutch). Thus we are caught between Scylla and Charybdis as we bumble and stumble our way forward. We are left with a nostalgia for the "good ol' days" and a golden age that never really existed — except in the faulty memories of those on the Left and those on the Right. No wonder this nation seems rudderless. If this is a (fair & balanced) realization that we actually live in the Age of SNAFU, so be it.

[x First Things]
Blinded By Nostalgia
By Yuval Levin

The twenty-first century has been a time of transition in American life. In our economy, our culture, our politics, and throughout our society, longstanding norms seem to be breaking down. Times of uneasy transition are often characterized by a politics of nostalgia for the peak of the passing order, and ours most definitely is.

Some on the left and right alike understandably miss the growth and opportunity of American life in the decades after the Second World War—a dynamism seemingly lost in the 1970s but regained in the ’80s and ’90s, if in a more frantic and less broad and stable way. Every monthly unemployment report and quarterly growth projection is now trailed by anguished concern about when we will finally snap back to those patterns.

Some miss the relative social consensus and broadly shared values of those postwar years. The most important conservative book of the Obama era—Charles Murray’s Coming Apart (2012)—pines for that consensus and its breadth. For all its many virtues, Murray’s book takes America in 1963 as its standard and painstakingly quantifies our falling away from it along some key social indicators.

Some miss the way we used to think about the future in that half-century after the war. On the right, this often takes the form of Reagan nostalgia. Ronald Reagan believed the promise of postwar America could be realized without the expansion of the welfare state it had engendered, and his economic reforms brought back the roaring growth that had characterized that period and so helped extend the golden age awhile. On the left, this nostalgia takes the form of yearning for renewed faith in precisely the welfare-state liberalism Reagan opposed. The most important progressive book of the Obama era—Lane Kenworthy’s Social Democratic America (2014)—argues for a recovery of the belief in that promise, even in the face of the undeniable costs it would entail and political difficulties it would confront. It seeks to salvage an old vision of the future.

Some, meanwhile, miss the seemingly harmonious politics of that era, in contrast to today’s polarization and supposed paralysis. In his 2006 book, The Audacity of Hope, Barack Obama looked longingly to a “time before the fall, a golden age in Washington when, regardless of which party was in power, civility reigned and government worked.” Many older Washingtonians think this way of what has happened to our politics.

Much of this is false nostalgia, of course. This vision of the postwar era is not quite wrong, but it is grossly incomplete. The trends and attitudes it hearkens to really existed, but the story it tells leaves little room for the epic battles over communism, civil rights, Vietnam, Watergate, détente, Reaganomics, and countless other fronts; little room for the burning cities, the political assassinations, the campus radicalism, or the social breakdown of that time; and little room for the costly errors and colossal failures of the politics of quiet conversations.

But true or false, the sum of these related nostalgias of the left and right is almost the full sum of our politics today, and that is a serious problem. It causes us to think of the future in terms of what we stand to lose rather than where we are headed, and has left Americans unusually pessimistic and uneasy.

America’s postwar strength was a function of unrepeatable circumstances. Our global competitors had burned each other’s economies to the ground while ours had only grown stronger in the war years. And a generation of Americans was shaped by the Great Depression and the war to be unusually unified and unusually trusting in large institutions. That combination was hardly the American norm; it was an extremely unusual mix that we cannot recreate and should not want to. Yet that WWII generation and its children, the baby boomers, came to expect American life to work that way.

The biggest problem with our politics of nostalgia is its disconnection from the present and therefore its blindness to the future. While we mourn the passing postwar order, we are missing some key things about the order now rising to replace it.

Perhaps the foremost trend our nostalgia keeps us from seeing is the vast decentralization of American life, which has characterized the early years of this century and looks only to grow. The postwar order was dominated by large institutions: big government, big business, big labor, big media, big universities, mass culture. But in every area of our national life—or at least every area except government—we are witnessing the replacement of large, centralized institutions by smaller, decentralized networks.

Younger Americans are growing up amid a profusion of options in every realm of life, with far more choice but far less predictability and security. Dynamism is increasingly driven not by economies of scale but by competitively-driven marginal improvements. Our culture is becoming a sea of subcultures. Sources of information, entertainment, and education are proliferating.

The near-total (and bipartisan) failure of our politics to confront these changes explains a lot of the dysfunction of our government today, and much of our frustration with it. Successful lives in the postwar era involved effectively navigating our large institutions and making the most of the benefits they offered. Success in the coming era will increasingly involve effectively navigating a profusion of smaller networks, and a government that wants to help people flourish will need to retool—focusing more on enabling bottom-up, incremental improvements and less on managing top-down, centralized systems. Both empowering individuals and offering them security will look rather different in this era.

This could be a boon for conservatives in some respects, as some of them already incline to a decentralized approach to policy, and a challenge for liberals who will need to think anew about how government might help the country thrive in this era. But neither liberals nor conservatives seem ready to face these changes. So the left always behaves as though it’s 1965 and the right as though it’s 1980.

On the cultural front, the tendency of decentralization to undermine all authoritative institutions will present more of a challenge for the right. Social conservatives are so far experiencing this transition as a loss of their dominant position in the culture. But they should see that this generally means not that their opponents are coming to dominate but that no one is. They should judge their prospects less in terms of their hold on our big institutions and more in terms of their success in forming a thriving and appealing subculture, or network of subcultures. Christianity has a great deal of experience in that difficult art, of course, but it is largely out of practice in our society.

Much the same is true for America more generally. Many economic, cultural, and political debates of the coming years will revolve around the promise and the dangers of decentralization. Americans have a lot of experience dealing with that promise and those dangers, but it is not the experience of the exceptional decades of the postwar era.

To regain our footing in the twenty-first century, we need to get over our blinding nostalgia for that unusual time. Ω

[Yuval Levin is the editor of National Affairs. He is also the Hertog Fellow at the Ethics and Public Policy Center, a senior editor of The New Atlantis, and a contributing editor to National Review and the Weekly Standard. He has been a member of the White House domestic policy staff (under President George W. Bush), executive director of the President’s Council on Bioethics, and a congressional staffer. His essays and articles have appeared in numerous publications including The New York Times, The Washington Post, The Wall Street Journal, and Commentary. Levin received a BA (political science) from American University and a PhD (social thought) from the University of Chicago.]

Copyright © 2014 First Things



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves

Saturday, October 25, 2014

Wiki ! (As They Say in Maui), Use This Link !

Someone near and dear to this blogger is an attorney with a scholarly bent and she sneers at Wikipedia. After this post goes up, this blogger will e-mail her a link to this forthright defense of Wikipedia. If she reads Isaacson, perhaps she will amend her animus. Perhaps not. You can lead your daughter to drink, but you cannot make her sit down. Youneverknow, as the great philosopher (Joaquin Andujar) said in a radio interview. If this is a (fair & balanced) reversion, so be it.

[x The Daily Beast]
You Can Look It Up: The Wikipedia Story
By Walter Isaacson

[From The Innovators: How A Group Of Hackers, Geniuses, And Geeks Created The Digital Revolution (2014) by Walter Isaacson.]

Ward Cunningham, Jimmy Wales, and the Wonder of Wikis

When he launched the Web in 1991, Tim Berners-Lee intended it to be used as a collaboration tool, which is why he was dismayed that the Mosaic browser did not give users the ability to edit the Web pages they were viewing. It turned Web surfers into passive consumers of published content. That lapse was partly mitigated by the rise of blogging, which encouraged user-generated content. In 1995 another medium was invented that went further toward facilitating collaboration on the Web. It was called a wiki, and it worked by allowing users to modify Web pages—not by having an editing tool in their browser but by clicking and typing directly onto Web pages that ran wiki software.

The application was developed by Ward Cunningham, another of those congenial Midwest natives (Indiana, in his case) who grew up making ham radios and getting turned on by the global communities they fostered. After graduating from Purdue, he got a job at an electronic equipment company, Tektronix, where he was assigned to keep track of projects, a task similar to what Berners-Lee faced when he went to CERN.

To do this he modified a superb software product developed by one of Apple’s most enchanting innovators, Bill Atkinson. It was called HyperCard, and it allowed users to make their own hyperlinked cards and documents on their computers. Apple had little idea what to do with the software, so at Atkinson’s insistence Apple gave it away free with its computers. It was easy to use, and even kids—especially kids—found ways to make HyperCard stacks of linked pictures and games.

Cunningham was blown away by HyperCard when he first saw it, but he found it cumbersome. So he created a super simple way of creating new cards and links: a blank box on each card in which you could type a title or word or phrase. If you wanted to make a link to Jane Doe or Harry’s Video Project or anything else, you simply typed those words in the box. “It was fun to do,” he said.

Then he created an Internet version of his HyperText program, writing it in just a few hundred lines of Perl code. The result was a new content management application that allowed users to edit and contribute to a Web page. Cunningham used the application to build a service, called the Portland Pattern Repository, that allowed software developers to exchange programming ideas and improve on the patterns that others had posted. “The plan is to have interested parties write web pages about the People, Projects and Patterns that have changed the way they program,” he wrote in an announcement posted in May 1995. “The writing style is casual, like email . . . Think of it as a moderated list where anyone can be moderator and everything is archived. It’s not quite a chat, still, conversation is possible.”

Now he needed a name. What he had created was a quick Web tool, but QuickWeb sounded lame, as if conjured up by a committee at Microsoft. Fortunately, there was another word for quick that popped from the recesses of his memory. When he was on his honeymoon in Hawaii thirteen years earlier, he remembered, “the airport counter agent directed me to take the wiki wiki bus between terminals.” When he asked what it meant, he was told that wiki was the Hawaiian word for quick, and wiki wiki meant superquick. So he named his Web pages and the software that ran them WikiWikiWeb, wiki for short.

In his original version, the syntax Cunningham used for creating links in a text was to smash words together so that there would be two or more capital letters—as in CapitalLetters—in a term. It became known as CamelCase, and its resonance would later be seen in scores of Internet brands such as AltaVista, MySpace, and YouTube.
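To see the mechanism rather than just the history, here is a minimal sketch of a CamelCase link pass, written in Python rather than Cunningham’s original Perl. The regular expression, the function name, and the /wiki/ link format are illustrative assumptions, not a reconstruction of WikiWikiWeb’s actual code.

import re

# A CamelCase term: two or more capitalized chunks smashed together,
# e.g. CapitalLetters or PortlandPatternRepository.
# (Illustrative pattern; the real WikiWikiWeb rules were more involved.)
CAMELCASE = re.compile(r'\b(?:[A-Z][a-z]+){2,}\b')

def link_camelcase(text):
    """Turn every CamelCase term into a link to a wiki page of the same name."""
    def to_link(match):
        page = match.group(0)
        return '<a href="/wiki/{0}">{0}</a>'.format(page)
    return CAMELCASE.sub(to_link, text)

print(link_camelcase("See PortlandPatternRepository for examples, or start a NewPage."))
# -> See <a href="/wiki/PortlandPatternRepository">PortlandPatternRepository</a> ...

Only the smashed-together terms become links; ordinary capitalized words such as “See” are left alone, which is what let the convention read as plain prose.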

WardsWiki (as it became known) allowed anyone to edit and contribute, without even needing a password. Previous versions of each page would be stored, in case someone botched one up, and there would be a “Recent Changes” page so that Cunningham and others could keep track of the edits. But there would be no supervisor or gatekeeper preapproving the changes. It would work, he said with cheery midwestern optimism, because “people are generally good.” It was just what Berners-Lee had envisioned, a Web that was read-write rather than read-only. “Wikis were one of the things that allowed collaboration,” Berners-Lee said. “Blogs were another.”

Like Berners-Lee, Cunningham made his basic software available for anyone to modify and use. Consequently, there were soon scores of wiki sites as well as open-source improvements to his software. But the wiki concept was not widely known beyond software engineers until January 2001, when it was adopted by a struggling Internet entrepreneur who was trying, without much success, to build a free, online encyclopedia.

Jimmy Wales was born in 1966 in Huntsville, Alabama, a town of rednecks and rocket scientists. Six years earlier, in the wake of Sputnik, President Eisenhower had personally gone there to open the Marshall Space Flight Center. “Growing up in Huntsville during the height of the space program kind of gave you an optimistic view of the future,” Wales observed. “An early memory was of the windows in our house rattling when they were testing the rockets. The space program was basically our hometown sports team, so it was exciting and you felt it was a town of technology and science.”

Wales, whose father was a grocery store manager, went to a one-room private school that was started by his mother and grandmother, who taught music. When he was three, his mother bought a World Book Encyclopedia from a door-to-door salesman; as he learned to read, it became an object of veneration. It put at his fingertips a cornucopia of knowledge along with maps and illustrations and even a few cellophane layers of transparencies you could lift to explore such things as the muscles, arteries, and digestive system of a dissected frog. But Wales soon discovered that the World Book had shortcomings: no matter how much was in it, there were many more things that weren’t. And this became more so with time. After a few years, there were all sorts of topics—moon landings and rock festivals and protest marches, Kennedys and kings—that were not included. World Book sent out stickers for owners to paste on the pages in order to update the encyclopedia, and Wales was fastidious about doing so. “I joke that I started as a kid revising the encyclopedia by stickering the one my mother bought.”

After graduating from Auburn and a halfhearted stab at graduate school, Wales took a job as a research director for a Chicago financial trading firm. But it did not fully engage him. His scholarly attitude was combined with a love for the Internet that had been honed by playing Multi-User Dungeons fantasies, which were essentially crowdsourced games. He founded and moderated an Internet mailing list discussion on Ayn Rand, the Russian-born American writer who espoused an objectivist and libertarian philosophy. He was very open about who could join the discussion forum, frowned on rants and the personal attack known as flaming, and managed comportment with a gentle hand. “I have chosen a ‘middle-ground’ method of moderation, a sort of behind-the-scenes prodding,” he wrote in a posting.

Before the rise of search engines, among the hottest Internet services were Web directories, which featured human-assembled lists and categories of cool sites, and Web rings, which created through a common navigation bar a circle of related sites that were linked to one another. Jumping on these bandwagons, Wales and two friends in 1996 started a venture that they dubbed BOMIS, for Bitter Old Men in Suits, and began casting around for ideas. They launched a panoply of startups that were typical of the dotcom boom of the late ’90s: a used-car ring and directory with pictures, a food-ordering service, a business directory for Chicago, and a sports ring. After Wales relocated to San Diego, he launched a directory and ring that served as “kind of a guy-oriented search engine,” featuring pictures of scantily clad women.

The rings showed Wales the value of having users help generate the content, a concept that was reinforced as he watched how the crowds of sports bettors on his site provided a more accurate morning line than any single expert could. He also was impressed by Eric Raymond’s The Cathedral and the Bazaar (1999), which explained why an open and crowd-generated bazaar was a better model for a website than the carefully controlled top-down construction of a cathedral.

Wales next tried an idea that reflected his childhood love of the World Book: an online encyclopedia. He dubbed it Nupedia, and it had two attributes: it would be written by volunteers, and it would be free. It was an idea that had been proposed in 1999 by Richard Stallman, the pioneering advocate of free software. Wales hoped eventually to make money by selling ads. To help develop it, he hired a doctoral student in philosophy, Larry Sanger, whom he first met in online discussion groups. “He was specifically interested in finding a philosopher to lead the project,” Sanger recalled.

Sanger and Wales developed a rigorous, seven-step process for creating and approving articles, which included assigning topics to proven experts, whose credentials had been vetted, and then putting the drafts through outside expert reviews, public reviews, professional copy editing, and public copy editing. “We wish editors to be true experts in their fields and (with few exceptions) possess Ph.Ds.,” the Nupedia policy guidelines stipulated. “Larry’s view was that if we didn’t make it more academic than a traditional encyclopedia, people wouldn’t believe in it and respect it,” Wales explained. “He was wrong, but his view made sense given what we knew at the time.” The first article, published in March 2000, was on atonality by a scholar at the Johannes Gutenberg University in Mainz, Germany.

It was a painfully slow process and, worse yet, not a lot of fun. The whole point of writing for free online, as Justin Hall had shown, was that it produced a jolt of joy. After a year, Nupedia had only about a dozen articles published, making it useless as an encyclopedia, and 150 that were still in draft stage, which indicated how unpleasant the process had become. It had been rigorously engineered not to scale.

This hit home to Wales when he decided that he would personally write an article on Robert Merton, an economist who had won the Nobel Prize for creating a mathematical model for markets containing derivatives. Wales had published a paper on option pricing theory, so he was very familiar with Merton’s work. “I started to try to write the article and it was very intimidating, because I knew they were going to send my draft out to the most prestigious finance professors they could find,” Wales said. “Suddenly I felt like I was back in grad school, and it was very stressful. I realized that the way we had set things up was not going to work.”

That was when Wales and Sanger discovered Ward Cunningham’s wiki software. Like many digital-age innovations, the application of wiki software to Nupedia in order to create Wikipedia—combining two ideas to create an innovation—was a collaborative process involving thoughts that were already in the air. But in this case a very non-wiki-like dispute erupted over who deserved the most credit.

The way Sanger remembered the story, he was having lunch in early January 2001 at a roadside taco stand near San Diego with a friend named Ben Kovitz, a computer engineer. Kovitz had been using Cunningham’s wiki and described it at length. It then dawned on Sanger, he claimed, that a wiki could be used to help solve the problems he was having with Nupedia. “Instantly I was considering whether wiki would work as a more open and simple editorial system for a free, collaborative encyclopedia,” Sanger later recounted. “The more I thought about it, without even having seen a wiki, the more it seemed obviously right.” In his version of the story, he then convinced Wales to try the wiki approach.

Kovitz, for his part, contended that he was the one who came up with the idea of using wiki software for a crowdsourced encyclopedia and that he had trouble convincing Sanger. “I suggested that instead of just using the wiki with Nupedia’s approved staff, he open it up to the general public and let each edit appear on the site immediately, with no review process,” Kovitz recounted. “My exact words were to allow ‘any fool in the world with Internet access’ to freely modify any page on the site.” Sanger raised some objections: “Couldn’t total idiots put up blatantly false or biased descriptions of things?” Kovitz replied, “Yes, and other idiots could delete those changes or edit them into something better.”

As for Wales’s version of the story, he later claimed that he had heard about wikis a month before Sanger’s lunch with Kovitz. Wikis had, after all, been around for more than four years and were a topic of discussion among programmers, including one who worked at BOMIS, Jeremy Rosenfeld, a big kid with a bigger grin. “Jeremy showed me Ward’s wiki in December 2000 and said it might solve our problem,” Wales recalled, adding that when Sanger showed him the same thing, he responded, “Oh, yes, wiki, Jeremy showed me this last month.” Sanger challenged that recollection, and a nasty crossfire ensued on Wikipedia’s discussion boards. Wales finally tried to de-escalate the sniping with a post telling Sanger, “Gee, settle down,” but Sanger continued his battle against Wales in a variety of forums.

The dispute presented a classic case of a historian’s challenge when writing about collaborative creativity: each player has a different recollection of who made which contribution, with a natural tendency to inflate his own. We’ve all seen this propensity many times in our friends, and perhaps even once or twice in ourselves. But it is ironic that such a dispute attended the birth of one of history’s most collaborative creations, a site that was founded on the faith that people are willing to contribute without requiring credit. (Tellingly, and laudably, Wikipedia’s entries on its own history and the roles of Wales and Sanger have turned out, after much fighting on the discussion boards, to be balanced and objective.)

More important than determining who deserved credit is appreciating the dynamics that occur when people share ideas. Ben Kovitz, for one, understood this. He was the player who had the most insightful view—call it the “bumblebee at the right time” theory—on the collaborative way that Wikipedia was created. “Some folks, aiming to criticize or belittle Jimmy Wales, have taken to calling me one of the founders of Wikipedia, or even ‘the true founder,’” he said. “I suggested the idea, but I was not one of the founders. I was only the bumblebee. I had buzzed around the wiki flower for a while, and then pollinated the free-encyclopedia flower. I have talked with many others who had the same idea, just not in times or places where it could take root.”

That is the way that good ideas often blossom: a bumblebee brings half an idea from one realm, and pollinates another fertile realm filled with half-formed innovations. This is why Web tools are valuable, as are lunches at taco stands.

Cunningham was supportive, indeed delighted when Wales called him up in January 2001 to say he planned to use the wiki software to juice up his encyclopedia project. Cunningham had not sought to patent or copyright either the software or the wiki name, and he was one of those innovators who was happy to see his products become tools that anyone could use or adapt.

At first Wales and Sanger conceived of Wikipedia merely as an adjunct to Nupedia, sort of like a feeder product or farm team. The wiki articles, Sanger assured Nupedia’s expert editors, would be relegated to a separate section of the website and not be listed with the regular Nupedia pages. “If a wiki article got to a high level it could be put into the regular Nupedia editorial process,” he wrote in a post. Nevertheless, the Nupedia purists pushed back, insisting that Wikipedia be kept completely segregated, so as not to contaminate the wisdom of the experts. The Nupedia Advisory Board tersely declared on its website, “Please note: the editorial processes and policies of Wikipedia and Nupedia are totally separate; Nupedia editors and peer reviewers do not necessarily endorse the Wikipedia project, and Wikipedia contributors do not necessarily endorse the Nupedia project.” Though they didn’t know it, the pedants of the Nupedia priesthood were doing Wikipedia a huge favor by cutting the cord.

Unfettered, Wikipedia took off. It became to Web content what GNU/Linux was to software: a peer-to-peer commons collaboratively created and maintained by volunteers who worked for the civic satisfactions they found. It was a delightful, counterintuitive concept, perfectly suited to the philosophy, attitude, and technology of the Internet. Anyone could edit a page, and the results would show up instantly. You didn’t have to be an expert. You didn’t have to fax in a copy of your diploma. You didn’t have to be authorized by the Powers That Be. You didn’t even have to be registered or use your real name. Sure, that meant vandals could mess up pages. So could idiots or ideologues. But the software kept track of every version. If a bad edit appeared, the community could simply get rid of it by clicking on a “revert” link. “Imagine a wall where it was easier to remove graffiti than add it” is the way the media scholar Clay Shirky explained the process. “The amount of graffiti on such a wall would depend on the commitment of its defenders.” In the case of Wikipedia, its defenders were fiercely committed. Wars have been fought with less intensity than the reversion battles on Wikipedia. And somewhat amazingly, the forces of reason regularly triumphed.
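Shirky’s wall metaphor rests on a simple technical fact: every edit is kept, so a revert is just one more edit that restores an earlier version. Here is a minimal sketch of that idea in Python; the class names and methods are hypothetical illustrations, not MediaWiki’s actual data model.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    text: str
    author: str
    timestamp: datetime

@dataclass
class WikiPage:
    title: str
    revisions: list = field(default_factory=list)  # full history, oldest first

    def edit(self, text, author):
        # Every edit appends a new revision; nothing is ever overwritten.
        self.revisions.append(Revision(text, author, datetime.now(timezone.utc)))

    def current(self):
        return self.revisions[-1].text if self.revisions else ""

    def revert(self, to_index, author):
        # A revert is just another edit whose text copies an older revision.
        self.edit(self.revisions[to_index].text, author)

page = WikiPage("Sandbox")
page.edit("A carefully sourced paragraph.", "volunteer")
page.edit("GRAFFITI!!!", "vandal")
page.revert(0, "defender")  # one click restores the good version
assert page.current() == "A carefully sourced paragraph."

Because the vandal’s change stays in the history, nothing is lost; the asymmetry Shirky describes comes from the fact that a defender can restore the good version with a single click, while the vandal has to redo the damage from scratch every time.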

One month after Wikipedia’s launch, it had a thousand articles, approximately seventy times the number that Nupedia had after a full year. By September 2001, after eight months in existence, it had ten thousand articles. That month, when the September 11 attacks occurred, Wikipedia showed its nimbleness and usefulness; contributors scrambled to create new pieces on such topics as the World Trade Center and its architect. A year after that, the article total reached forty thousand, more than were in the World Book that Wales’s mother had bought. By March 2003 the number of articles in the English-language edition had reached 100,000, with close to five hundred active editors working almost every day. At that point, Wales decided to shut Nupedia down.

By then Sanger had been gone for a year. Wales had let him go. They had increasingly clashed on fundamental issues, such as Sanger’s desire to give more deference to experts and scholars. In Wales’s view, “people who expect deference because they have a Ph.D. and don’t want to deal with ordinary people tend to be annoying.” Sanger felt, to the contrary, that it was the nonacademic masses who tended to be annoying. “As a community, Wikipedia lacks the habit or tradition of respect for expertise,” he wrote in a New Year’s Eve 2004 manifesto that was one of many attacks he leveled after he left. “A policy that I attempted to institute in Wikipedia’s first year, but for which I did not muster adequate support, was the policy of respecting and deferring politely to experts.” Sanger’s elitism was rejected not only by Wales but by the Wikipedia community. “Consequently, nearly everyone with much expertise but little patience will avoid editing Wikipedia,” Sanger lamented.

Sanger turned out to be wrong. The uncredentialed crowd did not run off the experts. Instead the crowd itself became the expert, and the experts became part of the crowd. Early in Wikipedia’s development, I was researching a book about Albert Einstein and I noticed that the Wikipedia entry on him claimed that he had traveled to Albania in 1935 so that King Zog could help him escape the Nazis by getting him a visa to the United States. This was completely untrue, even though the passage included citations to obscure Albanian websites where this was proudly proclaimed, usually based on some third-hand series of recollections about what someone’s uncle once said a friend had told him. Using both my real name and a Wikipedia handle, I deleted the assertion from the article, only to watch it reappear. On the discussion page, I provided sources for where Einstein actually was during the time in question (Princeton) and what passport he was using (Swiss). But tenacious Albanian partisans kept reinserting the claim. The Einstein-in-Albania tug-of-war lasted weeks. I became worried that the obstinacy of a few passionate advocates could undermine Wikipedia’s reliance on the wisdom of crowds. But after a while, the edit wars ended, and the article no longer had Einstein going to Albania. At first I didn’t credit that success to the wisdom of crowds, since the push for a fix had come from me and not from the crowd. Then I realized that I, like thousands of others, was in fact a part of the crowd, occasionally adding a tiny bit to its wisdom.

A key principle of Wikipedia was that articles should have a neutral point of view. This succeeded in producing articles that were generally straightforward, even on controversial topics such as global warming and abortion. It also made it easier for people of different viewpoints to collaborate. “Because of the neutrality policy, we have partisans working together on the same articles,” Sanger explained. “It’s quite remarkable.” The community was usually able to use the lodestar of the neutral point of view to create a consensus article offering competing views in a neutral way. It became a model, rarely emulated, of how digital tools can be used to find common ground in a contentious society.

Not only were Wikipedia’s articles created collaboratively by the community; so were its operating practices. Wales fostered a loose system of collective management, in which he played guide and gentle prodder but not boss. There were wiki pages where users could jointly formulate and debate the rules. Through this mechanism, guidelines were evolved to deal with such matters as reversion practices, mediation of disputes, the blocking of individual users, and the elevation of a select few to administrator status. All of these rules grew organically from the community rather than being dictated downward by a central authority. Like the Internet itself, power was distributed. “I can’t imagine who could have written such detailed guidelines other than a bunch of people working together,” Wales reflected. “It’s common in Wikipedia that we’ll come to a solution that’s really well thought out because so many minds have had a crack at improving it.”

As it grew organically, with both its content and its governance sprouting from its grassroots, Wikipedia was able to spread like kudzu. At the beginning of 2014, there were editions in 287 languages, ranging from Afrikaans to Žemaitška. The total number of articles was 30 million, with 4.4 million in the English-language edition. In contrast, the Encyclopedia Britannica, which quit publishing a print edition in 2010, had eighty thousand articles in its electronic edition, less than 2 percent of the number in Wikipedia. “The cumulative effort of Wikipedia’s millions of contributors means you are a click away from figuring out what a myocardial infarction is, or the cause of the Agacher Strip War, or who Spangles Muldoon was,” Clay Shirky has written. “This is an unplanned miracle, like ‘the market’ deciding how much bread goes in the store. Wikipedia, though, is even odder than the market: not only is all that material contributed for free, it is available to you free.” The result has been the greatest collaborative knowledge project in history.

So why do people contribute? Harvard Professor Yochai Benkler dubbed Wikipedia, along with open-source software and other free collaborative projects, examples of “commons-based peer production.” He explained, “Its central characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands.” These motivations include the psychological reward of interacting with others and the personal gratification of doing a useful task. We all have our little joys, such as collecting stamps or being a stickler for good grammar, knowing Jeff Torborg’s college batting average or the order of battle at Trafalgar. These all find a home on Wikipedia.

There is something fundamental, almost primordial at work. Some Wikipedians refer to it as “wiki-crack.” It’s the rush of dopamine that seems to hit the brain’s pleasure center when you make a smart edit and it appears instantly in a Wikipedia article. Until recently, being published was a pleasure afforded only to a select few. Most of us in that category can remember the thrill of seeing our words appear in public for the first time. Wikipedia, like blogs, made that treat available to anyone. You didn’t have to be credentialed or anointed by the media elite.

For example, many of Wikipedia’s articles on the British aristocracy were largely written by a user known as Lord Emsworth. They were so insightful about the intricacies of the peerage system that some were featured as the “Article of the Day,” and Lord Emsworth rose to become a Wikipedia administrator. It turned out that Lord Emsworth, a name taken from P. G. Wodehouse’s novels, was actually a 16-year-old schoolboy in South Brunswick, New Jersey. On Wikipedia, nobody knows you’re a commoner.

Connected to that is the even deeper satisfaction that comes from helping to create the information that we use rather than just passively receiving it. “Involvement of people in the information they read,” wrote the Harvard professor Jonathan Zittrain, “is an important end itself.” A Wikipedia that we create in common is more meaningful than would be the same Wikipedia handed to us on a platter. Peer production allows people to be engaged.

Jimmy Wales often repeated a simple, inspiring mission for Wikipedia: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.” It was a huge, audacious, and worthy goal. But it badly understated what Wikipedia did. It was about more than people being “given” free access to knowledge; it was also about empowering them, in a way not seen before in history, to be part of the process of creating and distributing knowledge. Wales came to realize that. “Wikipedia allows people not merely to access other people’s knowledge but to share their own,” he said. “When you help build something, you own it, you’re vested in it. That’s far more rewarding than having it handed down to you.”

Wikipedia took the world another step closer to the vision propounded by Vannevar Bush in his 1945 essay, “As We May Think,” which predicted, “Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.” It also harkened back to Ada Lovelace, who asserted that machines would be able to do almost anything, except think on their own. Wikipedia was not about building a machine that could think on its own. It was instead a dazzling example of human-machine symbiosis, the wisdom of humans and the processing power of computers being woven together like a tapestry. When Wales and his new wife had a daughter in 2011, they named her Ada, after Lady Lovelace. Ω

[Walter Isaacson is a writer and biographer and also is the President and CEO of the Aspen Institute. He received a BA (history and literature) from Harvard University and then attended the University of Oxford as a Rhodes Scholar at Pembroke College, where he read philosophy, politics, and economics. In 2014 the National Endowment for the Humanities selected Isaacson for the Jefferson Lecture. In addition to The Innovators, Isaacson has written biographies of Henry Kissinger, Benjamin Franklin, Albert Einstein, and Steve Jobs.]

Copyright © 2014 The Daily Beast Company



Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2014 Sapper's (Fair & Balanced) Rants & Raves