Tuesday, June 30, 2009

Getting The Health Care System We Deserve

George Bernard Shaw supposedly said (and he said a lot): "Democracy is a device that ensures we shall be governed no better than we deserve." If he were alive today, GBS would look at our national "debate" about health care reform and say that we will have the health care system that we deserve. If this is (fair & balanced) pessimism, so be it.

[x Salon]
"This Modern World — Here We Go Again"
By Tom Tomorrow (Dan Perkins)

Ω


[Dan Perkins is an editorial cartoonist better known by the pen name "Tom Tomorrow". His weekly comic strip, "This Modern World," which comments on current events from a strong liberal perspective, appears regularly in approximately 150 papers across the U.S., as well as on Salon and Working for Change. The strip debuted in 1990 in SF Weekly.

Perkins, a longtime resident of Brooklyn, New York, currently lives in Connecticut. He received the Robert F. Kennedy Award for Excellence in Journalism in both 1998 and 2002.

When he is not working on projects related to his comic strip, Perkins writes a daily political weblog, also entitled "This Modern World," which he began in December 2001.]

Copyright © 2009 Salon Media Group, Inc.


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves

Monday, June 29, 2009

Demilitarize The National Anthem!

With the approach of the 4th of July, the electronic media will be filled with bombs bursting in air and the rockets' red glare. Enough with Francis Scott (Off-)Key already! "The Star-Spangled Banner" became the national anthem in 1931 when President Herbert C. Hoover signed the authorization bill enacted by Congress. Hoover should have vetoed the loser song and left well enough alone. It is no coincidence that the tune for the national anthem existed first as a British drinking song ("To Anacreon in Heaven"). 'Tis better to attempt to sing this song while drunk. Snarlin' Mike Kinsley offers several alternatives to the song filled with bombs and rockets and gives a slight edge to...

[x YouTube/AmericanMilitary Channel]
"America The Beautiful"
Introduction by Clint Eastwood
Performed by Willie Nelson (and most of Hollywood)



"America The Beautiful." (1895)
Words by Katharine Lee Bates and music by Samuel A. Ward.

O beautiful, for spacious skies,
For amber waves of grain,
For purple mountain majesties
Above the fruited plain!
America! America! God shed His grace on thee,
And crown thy good with brotherhood, from sea to shining sea.


O beautiful, for pilgrim feet
Whose stern, impassioned stress
A thoroughfare for freedom beat
Across the wilderness!
America! America! God mend thine ev'ry flaw;
Confirm thy soul in self control, thy liberty in law!


O beautiful, for heroes proved
In liberating strife,
Who more than self their country loved
And mercy more than life!
America! America! May God thy gold refine,
Till all success be nobleness, and ev'ry gain divine!


O beautiful, for patriot dream
That sees beyond the years,
Thine alabaster cities gleam
Undimmed by human tears!
America! America! God shed His grace on thee,
And crown thy good with brotherhood, from sea to shining sea!

There! Aren't gleaming alabaster cities and the fruited plain better than bombs and rockets? The only sour note here is the proclivity of The BFI to make regular reference to "the fruited plain" during his bloviating performances on his radio show. However, if "America The Beautiful" became the national anthem, The BFI — consummate patriot that he is — would not have the nerve to mock a phrase from the national anthem. Otherwise, The BFI would join the ranks of Roseanne Barr, Jose Feliciano, Michael Bolton, Carl Lewis, and Jimi Hendrix as a desecrator of the national anthem. So, fellow singer/sufferers of the national anthem, let us leave the bombs and the rockets for the amber waves of grain and the purple mountain majesties. If this is (fair & balanced) musicology, so be it.

P.S. Wikipedia tells us that Lynn Sherr's 2001 book America the Beautiful discusses the origins of the song and the backgrounds of its authors in depth. Sherr points out that the poem has the same meter as that of "Auld Lang Syne"; the songs can be sung interchangeably.

[x Washington Fishwrap]
Oh, Say Can You Sing It?
By Michael Kinsley


In the Age of Karaoke, more people (including me) like to join in the singing when they strike up the national anthem at public occasions. No one can stop you, no matter how embarrassed she might be by your obvious lack of talent. It's always disappointing when you're invited to stand and enjoy some high school glee club or famous opera singer. But chances are that even the opera singer won't get it right.

"The Star-Spangled Banner" is notoriously unsingable. A professor of music, Caldwell Titcomb of Brandeis, pointed out years ago in the New Republic that its melody spans nearly two octaves, when most people are good for one octave, max. The first eight lines are one enormous sentence with subordinate clauses, leaving no really good place to take a breath. There are far too many mandatory leaps off the high board (". . . what so PROU-dly we hail . . .").

The melody is lifted from an old English drinking song. The lyrics are all about bombs and war and bloodshed — and not in a good way. By the penultimate verse, the song has turned really nasty: "No refuge could save the hireling and slave/From the terror of flight or the gloom of the grave." In the first verse — the one we generally sing — there is only one reference to any value commonly associated with America: "land of the free." By contrast, "home of the brave" is empty bravado. There is nothing in the American myth (let alone reality) to suggest that we are braver than anyone else.

No, "The Star-Spangled Banner" has got to go. The only question is, What should replace it? Here we have an embarrassment of riches. Let's review some of the candidates.

The unimaginative, easy choice would be "My Country, 'Tis of Thee," a.k.a. "America" — as if applying for the job, since the word "America" isn't even in it. Case for: The melody is simple, familiar and easy to sing, with a range of less than an octave. The lyrics express American sentiments, by and large, though with no particular flair. Case against: The tune is a rip-off of "God Save the Queen," and as insipid as the lyrics to boot.

"The Battle Hymn of the Republic" has a range of one octave exactly, and beautiful, inspiring lyrics. A bit martial, of course, but in reference to our nation's greatest cause rather than mindless nonsense about rockets and bombs. A bit religious, too, but probably not unconstitutionally so if "one nation, under God," passes muster in the Pledge of Allegiance. Written by Julia Ward Howe during the Civil War to supply something more wholesome for Union soldiers to sing to the tune of "John Brown's body lies a-moulderin' in the grave," it is already used sometimes at liberal occasions as a substitute for "The Star-Spangled Banner." Even at this late date some Southerners might object. But hey — who won the war?

The best of the conventional choices would be "America the Beautiful." Its range is an octave plus one note, with a couple of tricky leaps ("Uh-MARE-i-cah, America"). But the tune is lovely, and the lyrics are eloquent and almost eerily appropriate in their humility. ("Confirm thy soul in self-control/Thy liberty in law.")

What about Irving Berlin's "God Bless America"? The lyrics are more enthusiastic than eloquent. There is nothing so wonderful about our oceans being "white with foam." But it's a tuneful tune, not only easy to remember but hard to get out of your head. It might seem tough to argue that "God Bless America" is not a religious sentiment, potentially violating the establishment clause of the First Amendment. But the song is so jolly and un-hymn-like that I am confident some professors at our finer law schools could make the case. (You see? That tune just fills you with American optimism and energy.) As this column has pointed out, in our political culture the phrase "God bless America" has come to mean little more than "I'm through with my speech. See you later."

Woody Guthrie wrote "This Land Is Your Land" out of annoyance at the popularity of "God Bless America." The melody has a range of just seven notes, which is hard to beat. The lyrics can be treated as either a generalized appreciation of the American landscape or a more pointed political claim for equality ("This land was made for you and me"). There's no question which one Guthrie had in mind. He was a communist fellow-traveler. But the song has been absorbed into our culture and is loved even by Republicans who have no idea about its origins.

How about Bruce Springsteen's "Born in the USA"? A bit dark for a national anthem, I suppose. The Shaker hymn "Simple Gifts" (turned by Aaron Copland into a theme in "Appalachian Spring")? Have I left out your favorite? Nominations are welcome. Anything would be better than those "bombs bursting in air." Ω

[Michael Kinsley is a political journalist, commentator, television host, and liberal pundit. Primarily active in print media as both a writer and editor, he also became known to television audiences as a co-host of CNN's "Crossfire." Kinsley has been a notable participant in the mainstream media's development of online content: he was the founding editor of Slate. He is currently a columnist for both Time magazine and The Washington Post. Kinsley graduated from Harvard University in 1972; at Harvard, he served as vice president of the university's daily newspaper, The Harvard Crimson. He was awarded a Rhodes Scholarship and studied at Magdalen College, Oxford, then returned to Harvard for law school. While still a third-year law student, he began working at The New Republic and finished his Juris Doctor degree in the evening program at The George Washington University Law School.]


Copyright © 2009 The Washington Post Company


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves

Sunday, June 28, 2009

According To The Flatster, The USA (With A Printing Press) = Russia (With Oil Wells)

Thomas (The Flatster) Friedman wrote The World Is Flat in '05, and his analysis of the global economy revealed a new (and flat, or level) playing field that included China and India as major economic powers. Today, The Flatster offers a suggestion to all of the benighted state governments in the Land O'The Free & The Home O'The Brave: the prerequisite for a state-issued driver's license should be a high school diploma. Drop out and start walkin'. Instead, the yahoos in state capitols spend their time fretting about non-existent voter fraud and the need for voter ID laws. While the yahoos fiddle, dropouts are issued licenses to drive because they can recognize the shape of a stop sign. High school dropouts disdain virtually everything except a driver's license. The dropout rate in every state will fall if a high school diploma is the key to gaining the right to drive. Continue to allow pre-graduation licenses to drivers younger than 18, but when a driver turns 18: no diploma, no driving. Bring back Nancy Sinatra: "These dropout feet are made for walkin'." If this is a (fair & balanced) idea whose time has come, so be it.

P.S.: The Flatster concludes his screed with a Top 10 List of his own — Thomas Edison to Larry Page. Larry Page?

[x NY Fishwrap]
Invent, Invent, Invent
By Thomas L. Friedman


I was at a conference in St. Petersburg, Russia, a few weeks ago and interviewed Craig Barrett, the former chairman of Intel, about how America should get out of its current economic crisis. His first proposal was this: Any American kid who wants to get a driver’s license has to finish high school. No diploma — no license. Hey, why would we want to put a kid who can barely add, read or write behind the wheel of a car?

Now what does that have to do with pulling us out of the Great Recession? A lot. Historically, recessions have been a time when new companies, like Microsoft, get born, and good companies separate themselves from their competition. It makes sense. When times are tight, people look for new, less expensive ways to do old things. Necessity breeds invention.

Therefore, the country that uses this crisis to make its population smarter and more innovative — and endows its people with more tools and basic research to invent new goods and services — is the one that will not just survive but thrive down the road.

We might be able to stimulate our way back to stability, but we can only invent our way back to prosperity. We need everyone at every level to get smarter.

I still believe that America, with its unrivaled freedoms, venture capital industry, research universities and openness to new immigrants has the best assets to be taking advantage of this moment — to out-innovate our competition. But we should be pressing these advantages to the max right now.

Russia, it seems to me, is clearly wasting this crisis. Oil prices rebounded from $30 to $70 a barrel too quickly, so the pressure for Russia to really reform and diversify its economy is off. The struggle for Russia’s post-Communist economic soul — whether it is going to be more OPEC than O.E.C.D., a country that derives more of its wealth from drilling its mines than from tapping its minds — seems to be over for now.

At the St. Petersburg exposition center, showing off the Russian economy, the two biggest display booths belonged to Gazprom, the state-controlled oil and gas company, and Sberbank, Russia’s largest state-owned bank. Russian companies that actually made things that the world wanted were virtually nonexistent: Two-thirds of Russia’s exports today are oil and gas. Gazprom makes the money, and Sberbank lends it out.

As one Western banker put it, when oil is $35 a barrel, Russia “has no choice” but to reform, to diversify its economy and to put in place the rule of law and incentives that would really stimulate small business. But at $70 a barrel, it takes an act of enormous “political will,” which the petro-old K.G.B. alliance that dominates the Kremlin today is unlikely to summon. Too much rule of law and transparency would constrict the ruling clique’s own freedom of maneuver.

China is also courting trouble. Recently — in the name of censoring pornography — China blocked access to Google and demanded that computers sold in China come supplied with an Internet nanny filter called Green Dam Youth Escort, starting July 1. Green Dam can also be used to block politics, not just Playboy. Once you start censoring the Web, you restrict the ability to imagine and innovate. You are telling young Chinese that if they really want to explore, they need to go abroad.

We should be taking advantage. Now is when we should be stapling a green card to the diploma of any foreign student who earns an advanced degree at any U.S. university, and we should be ending all H-1B visa restrictions on knowledge workers who want to come here. They would invent many more jobs than they would supplant. The world’s best brains are on sale. Let’s buy more!

Barrett argues that we should also use this crisis to: 1) require every state to benchmark their education standards against the best in the world, not the state next door; 2) double the budgets for basic scientific research at the National Science Foundation, the Department of Energy and the National Institute of Standards and Technology; 3) lower the corporate tax rate; 4) revamp Sarbanes-Oxley so that it is easier to start a small business; 5) find a cost-effective way to extend health care to every American.

We need to do all we can now to get more brains connected to more capital to spawn more new companies faster. As Jeff Immelt, the chief of General Electric, put it in a speech on Friday, this moment is “an opportunity to turn financial adversity into national advantage, to launch innovations of lasting value to our country.”

Sometimes, I worry, though, that what oil money is to Russia, our ability to print money is to America. Look at the billions we just printed to bail out two dinosaurs: General Motors and Chrysler.

Lately, there has been way too much talk about minting dollars and too little about minting our next Thomas Edison, Bob Noyce, Steve Jobs, Bill Gates, Vint Cerf, Jerry Yang, Marc Andreessen, Sergey Brin, Bill Joy and Larry Page. Adding to that list is the only stimulus that matters. Otherwise, we’re just Russia with a printing press. Ω

[Thomas L. Friedman became The New York Times' foreign-affairs columnist in 1995. He won the 2002 Pulitzer Prize for commentary, his third Pulitzer for the paper (the earlier prizes were awarded in 1983 and 1988). Friedman's latest book, The World Is Flat: A Brief History of the Twenty-first Century (2005), won the inaugural Goldman Sachs/Financial Times Business Book of the Year award. Friedman received a B.A. in Mediterranean studies from Brandeis University in 1975. In 1978 he received a Master of Philosophy degree in Modern Middle East studies from Oxford.]

Copyright © 2009 The New York Times Company


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves

Saturday, June 27, 2009

Hold The Phone! Google Can Make You Smarter?

Smarter or dumber? Is that the question of our age? If this is (fair & balanced) ambivalence, so be it.

[x The Atlantic]
Get Smarter
By Jamais Cascio


Seventy-four thousand years ago, humanity nearly went extinct. A super-volcano at what’s now Lake Toba, in Sumatra, erupted with a strength more than a thousand times that of Mount St. Helens in 1980. Some 800 cubic kilometers of ash filled the skies of the Northern Hemisphere, lowering global temperatures and pushing a climate already on the verge of an ice age over the edge. Some scientists speculate that as the Earth went into a deep freeze, the population of Homo sapiens may have dropped to as low as a few thousand families.

The Mount Toba incident, although unprecedented in magnitude, was part of a broad pattern. For a period of 2 million years, ending with the last ice age around 10,000 B.C., the Earth experienced a series of convulsive glacial events. This rapid-fire climate change meant that humans couldn’t rely on consistent patterns to know which animals to hunt, which plants to gather, or even which predators might be waiting around the corner.

How did we cope? By getting smarter. The neurophysiologist William Calvin argues persuasively that modern human cognition—including sophisticated language and the capacity to plan ahead—evolved in response to the demands of this long age of turbulence. According to Calvin, the reason we survived is that our brains changed to meet the challenge: we transformed the ability to target a moving animal with a thrown rock into a capability for foresight and long-term planning. In the process, we may have developed syntax and formal structure from our simple language.

Our present century may not be quite as perilous for the human race as an ice age in the aftermath of a super-volcano eruption, but the next few decades will pose enormous hurdles that go beyond the climate crisis. The end of the fossil-fuel era, the fragility of the global food web, growing population density, and the spread of pandemics, as well as the emergence of radically transformative bio- and nanotechnologies—each of these threatens us with broad disruption or even devastation. And as good as our brains have become at planning ahead, we’re still biased toward looking for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard for us to manage.

But here’s an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: by getting smarter. But this time, we don’t have to rely solely on natural evolutionary processes to boost our intelligence. We can do it ourselves.

Most people don’t realize that this process is already under way. In fact, it’s happening all around us, across the full spectrum of how we understand intelligence. It’s visible in the hive mind of the Internet, in the powerful tools for simulation and visualization that are jump-starting new scientific disciplines, and in the development of drugs that some people (myself included) have discovered let them study harder, focus better, and stay awake longer with full clarity. So far, these augmentations have largely been outside of our bodies, but they’re very much part of who we are today: they’re physically separate from us, but we and they are becoming cognitively inseparable. And advances over the next few decades, driven by breakthroughs in genetic engineering and artificial intelligence, will make today’s technologies seem primitive. The nascent jargon of the field describes this as “intelligence augmentation.” I prefer to think of it as “You+.”

Scientists refer to the 12,000 years or so since the last ice age as the Holocene epoch. It encompasses the rise of human civilization and our co-evolution with tools and technologies that allow us to grapple with our physical environment. But if intelligence augmentation has the kind of impact I expect, we may soon have to start thinking of ourselves as living in an entirely new era. The focus of our technological evolution would be less on how we manage and adapt to our physical world, and more on how we manage and adapt to the immense amount of knowledge we’ve created. We can call it the Nöocene epoch, from Pierre Teilhard de Chardin’s concept of the Nöosphere, a collective consciousness created by the deepening interaction of human minds. As that epoch draws closer, the world is becoming a very different place.

Of course, we've been augmenting our ability to think for millennia. When we developed written language, we significantly increased our functional memory and our ability to share insights and knowledge across time and space. The same thing happened with the invention of the printing press, the telegraph, and the radio. The rise of urbanization allowed a fraction of the populace to focus on more-cerebral tasks—a fraction that grew inexorably as more-complex economic and social practices demanded more knowledge work, and industrial technology reduced the demand for manual labor. And caffeine and nicotine, of course, are both classic cognitive-enhancement drugs, primitive though they may be.

With every technological step forward, though, has come anxiety about the possibility that technology harms our natural ability to think. These anxieties were given eloquent expression in these pages by Nicholas Carr, whose essay “Is Google Making Us Stupid?” (July/August 2008 Atlantic) argued that the information-dense, hyperlink-rich, spastically churning Internet medium is effectively rewiring our brains, making it harder for us to engage in deep, relaxed contemplation.

Carr’s fears about the impact of wall-to-wall connectivity on the human intellect echo cyber-theorist Linda Stone’s description of “continuous partial attention,” the modern phenomenon of having multiple activities and connections under way simultaneously. We’re becoming so accustomed to interruption that we’re starting to find focusing difficult, even when we’ve achieved a bit of quiet. It’s an induced form of ADD—a “continuous partial attention-deficit disorder,” if you will.

There’s also just more information out there—because unlike with previous information media, with the Internet, creating material is nearly as easy as consuming it. And it’s easy to mistake more voices for more noise. In reality, though, the proliferation of diverse voices may actually improve our overall ability to think. In Everything Bad Is Good for You, Steven Johnson argues that the increasing complexity and range of media we engage with have, over the past century, made us smarter, rather than dumber, by providing a form of cognitive calisthenics. Even pulp-television shows and video games have become extraordinarily dense with detail, filled with subtle references to broader subjects, and more open to interactive engagement. They reward the capacity to make connections and to see patterns—precisely the kinds of skills we need for managing an information glut.

Scientists describe these skills as our “fluid intelligence”—the ability to find meaning in confusion and to solve new problems, independent of acquired knowledge. Fluid intelligence doesn’t look much like the capacity to memorize and recite facts, the skills that people have traditionally associated with brainpower. But building it up may improve the capacity to think deeply that Carr and others fear we’re losing for good. And we shouldn’t let the stresses associated with a transition to a new era blind us to that era’s astonishing potential. We swim in an ocean of data, accessible from nearly anywhere, generated by billions of devices. We’re only beginning to explore what we can do with this knowledge-at-a-touch.

Moreover, the technology-induced ADD that’s associated with this new world may be a short-term problem. The trouble isn’t that we have too much information at our fingertips, but that our tools for managing it are still in their infancy. Worries about “information overload” predate the rise of the Web (Alvin Toffler coined the phrase in 1970), and many of the technologies that Carr worries about were developed precisely to help us get some control over a flood of data and ideas. Google isn’t the problem; it’s the beginning of a solution.

In any case, there’s no going back. The information sea isn’t going to dry up, and relying on cognitive habits evolved and perfected in an era of limited information flow—and limited information access—is futile. Strengthening our fluid intelligence is the only viable approach to navigating the age of constant connectivity.

When people hear the phrase intelligence augmentation, they tend to envision people with computer chips plugged into their brains, or a genetically engineered race of post-human super-geniuses. Neither of these visions is likely to be realized, for reasons familiar to any Best Buy shopper. In a world of ongoing technological acceleration, today’s cutting-edge brain implant would be tomorrow’s obsolete junk—and good luck if the protocols change or you’re on the wrong side of a “format war” (anyone want a Betamax implant?). And then there’s the question of stability: Would you want a chip in your head made by the same folks that made your cell phone, or your PC?

Likewise, the safe modification of human genetics is still years away. And even after genetic modification of adult neurobiology becomes possible, the science will remain in flux; our understanding of how augmentation works, and what kinds of genetic modifications are possible, would still change rapidly. As with digital implants, the brain modification you might undergo one week could become obsolete the next. Who would want a 2025-vintage brain when you’re competing against hotshots with Model 2026?

Yet in one sense, the age of the cyborg and the super-genius has already arrived. It just involves external information and communication devices instead of implants and genetic modification. The bioethicist James Hughes of Trinity College refers to all of this as “exocortical technology,” but you can just think of it as “stuff you already own.” Increasingly, we buttress our cognitive functions with our computing systems, no matter that the connections are mediated by simple typing and pointing. These tools enable our brains to do things that would once have been almost unimaginable:

• powerful simulations and massive data sets allow physicists to visualize, understand, and debate models of an 11-dimension universe;
• real-time data from satellites, global environmental databases, and high-resolution models allow geophysicists to recognize the subtle signs of long-term changes to the planet;
• cross-connected scheduling systems allow anyone to assemble, with a few clicks, a complex, multimodal travel itinerary that would have taken a human travel agent days to create.

If that last example sounds prosaic, it simply reflects how embedded these kinds of augmentation have become. Not much more than a decade ago, such a tool was outrageously impressive—and it destroyed the travel-agent industry.

That industry won’t be the last one to go. Any occupation requiring pattern-matching and the ability to find obscure connections will quickly morph from the domain of experts to that of ordinary people whose intelligence has been augmented by cheap digital tools. Humans won’t be taken out of the loop—in fact, many, many more humans will have the capacity to do something that was once limited to a hermetic priesthood. Intelligence augmentation decreases the need for specialization and increases participatory complexity.

As the digital systems we rely upon become faster, more sophisticated, and (with the usual hiccups) more capable, we’re becoming more sophisticated and capable too. It’s a form of co-evolution: we learn to adapt our thinking and expectations to these digital systems, even as the system designs become more complex and powerful to meet more of our needs—and eventually come to adapt to us.

Consider the Twitter phenomenon, which went from nearly invisible to nearly ubiquitous (at least among the online crowd) in early 2007. During busy periods, the user can easily be overwhelmed by the volume of incoming messages, most of which are of only passing interest. But there is a tiny minority of truly valuable posts. (Sometimes they have extreme value, as they did during the October 2007 wildfires in California and the November 2008 terrorist attacks in Mumbai.) At present, however, finding the most-useful bits requires wading through messages like “My kitty sneezed!” and “I hate this taco!”

But imagine if social tools like Twitter had a way to learn what kinds of messages you pay attention to, and which ones you discard. Over time, the messages that you don’t really care about might start to fade in the display, while the ones that you do want to see could get brighter. Such attention filters—or focus assistants—are likely to become important parts of how we handle our daily lives. We’ll move from a world of “continuous partial attention” to one we might call “continuous augmented awareness.”

As processor power increases, tools like Twitter may be able to draw on the complex simulations and massive data sets that have unleashed a revolution in science. They could become individualized systems that augment our capacity for planning and foresight, letting us play “what-if” with our life choices: where to live, what to study, maybe even where to go for dinner. Initially crude and clumsy, such a system would get better with more data and more experience; just as important, we’d get better at asking questions. These systems, perhaps linked to the cameras and microphones in our mobile devices, would eventually be able to pay attention to what we’re doing, and to our habits and language quirks, and learn to interpret our sometimes ambiguous desires. With enough time and complexity, they would be able to make useful suggestions without explicit prompting.

And such systems won’t be working for us alone. Intelligence has a strong social component; for example, we already provide crude cooperative information-filtering for each other. In time, our interactions through the use of such intimate technologies could dovetail with our use of collaborative knowledge systems (such as Wikipedia), to help us not just to build better data sets, but to filter them with greater precision. As our capacity to provide that filter gets faster and richer, it increasingly becomes something akin to collaborative intuition—in which everyone is effectively augmenting everyone else.

In pharmacology, too, the future is already here. One of the most prominent examples is a drug called modafinil. Developed in the 1970s, modafinil—sold in the U.S. under the brand name Provigil—appeared on the cultural radar in the late 1990s, when the American military began to test it for long-haul pilots. Extended use of modafinil can keep a person awake and alert for well over 32 hours on end, with only a full night’s sleep required to get back to a normal schedule.

While it is FDA-approved only for a few sleep disorders, like narcolepsy and sleep apnea, doctors increasingly prescribe it to those suffering from depression, to “shift workers” fighting fatigue, and to frequent business travelers dealing with time-zone shifts. I’m part of the latter group: like more and more professionals, I have a prescription for modafinil in order to help me overcome jet lag when I travel internationally. When I started taking the drug, I expected it to keep me awake; I didn’t expect it to make me feel smarter, but that’s exactly what happened. The change was subtle but clear, once I recognized it: within an hour of taking a standard 200-mg tablet, I was much more alert, and thinking with considerably more clarity and focus than usual. This isn’t just a subjective conclusion. A University of Cambridge study, published in 2003, concluded that modafinil confers a measurable cognitive-enhancement effect across a variety of mental tasks, including pattern recognition and spatial planning, and sharpens focus and alertness.

I’m not the only one who has taken advantage of this effect. The Silicon Valley insider webzine TechCrunch reported in July 2008 that some entrepreneurs now see modafinil as an important competitive tool. The tone of the piece was judgmental, but the implication was clear: everybody’s doing it, and if you’re not, you’re probably falling behind.

This is one way a world of intelligence augmentation emerges. Little by little, people who don’t know about drugs like modafinil or don’t want to use them will face stiffer competition from the people who do. From the perspective of a culture immersed in athletic doping wars, the use of such drugs may seem like cheating. From the perspective of those who find that they’re much more productive using this form of enhancement, it’s no more cheating than getting a faster computer or a better education.

Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.” Eventually, some of these may become over-the-counter products at your local pharmacy, or in the juice and snack aisles at the supermarket. Spam e-mail would be full of offers to make your brain bigger, and your idea production more powerful.

Such a future would bear little resemblance to Brave New World or similar narcomantic nightmares; we may fear the idea of a population kept doped and placated, but we’re more likely to see a populace stuck in overdrive, searching out the last bits of competitive advantage, business insight, and radical innovation. No small amount of that innovation would be directed toward inventing the next, more powerful cognitive-enhancement technology.

This would be a different kind of nightmare, perhaps, and cause waves of moral panic and legislative restriction. Safety would be a huge issue. But as we’ve found with athletic doping, if there’s a technique for beating out rivals (no matter how risky), shutting it down is nearly impossible. This would be yet another pharmacological arms race—and in this case, the competitors on one side would just keep getting smarter.

The most radical form of superhuman intelligence, of course, wouldn’t be a mind augmented by drugs or exocortical technology; it would be a mind that isn’t human at all. Here we move from the realm of extrapolation to the realm of speculation, since solid predictions about artificial intelligence are notoriously hard: our understanding of how the brain creates the mind remains far from good enough to tell us how to construct a mind in a machine.

But while the concept remains controversial, I see no good argument for why a mind running on a machine platform instead of a biological platform will forever be impossible; whether one might appear in five years or 50 or 500, however, is uncertain. I lean toward 50, myself. That’s enough time to develop computing hardware able to run a high-speed neural network as sophisticated as that of a human brain, and enough time for the kids who will have grown up surrounded by virtual-world software and household robots—that is, the people who see this stuff not as “Technology,” but as everyday tools—to come to dominate the field.

Many proponents of developing an artificial mind are sure that such a breakthrough will be the biggest change in human history. They believe that a machine mind would soon modify itself to get smarter—and with its new intelligence, then figure out how to make itself smarter still. They refer to this intelligence explosion as “the Singularity,” a term applied by the computer scientist and science-fiction author Vernor Vinge. “Within thirty years, we will have the technological means to create superhuman intelligence,” Vinge wrote in 1993. “Shortly after, the human era will be ended.” The Singularity concept is a secular echo of Teilhard de Chardin’s “Omega Point,” the culmination of the Nöosphere at the end of history. Many believers in Singularity—which one wag has dubbed “the Rapture for nerds”—think that building the first real AI will be the last thing humans do. Some imagine this moment with terror, others with a bit of glee.

My own suspicion is that a stand-alone artificial mind will be more a tool of narrow utility than something especially apocalyptic. I don’t think the theory of an explosively self-improving AI is convincing—it’s based on too many assumptions about behavior and the nature of the mind. Moreover, AI researchers, after years of talking about this prospect, are already ultra-conscious of the risk of runaway systems.

More important, though, is that the same advances in processor and process that would produce a machine mind would also increase the power of our own cognitive-enhancement technologies. As intelligence augmentation allows us to make ourselves smarter, and then smarter still, AI may turn out to be just a sideshow: we could always be a step ahead.

So what's life like in a world of brain doping, intuition networks, and the occasional artificial mind?

Banal.

Not from our present perspective, of course. For us, now, looking a generation ahead might seem surreal and dizzying. But remember: people living in, say, 2030 will have lived every moment from now until then—we won’t jump into the future. For someone going from 2009 to 2030 day by day, most of these changes wouldn’t be jarring; instead, they’d be incremental, almost overdetermined, and the occasional surprises would quickly blend into the flow of inevitability.

By 2030, then, we’ll likely have grown accustomed to (and perhaps even complacent about) a world where sophisticated foresight, detailed analysis and insight, and augmented awareness are commonplace. We’ll have developed a better capacity to manage both partial attention and laser-like focus, and be able to slip between the two with ease—perhaps by popping the right pill, or eating the right snack. Sometimes, our augmentation assistants will handle basic interactions on our behalf; that’s okay, though, because we’ll increasingly see those assistants as extensions of ourselves.

The amount of data we’ll have at our fingertips will be staggering, but we’ll finally have gotten over the notion that accumulated information alone is a hallmark of intelligence. The power of all of this knowledge will come from its ability to inform difficult decisions, and to support complex analysis. Most professions will likely use simulation and modeling in their day-to-day work, from political decisions to hairstyle options. In a world of augmented intelligence, we will have a far greater appreciation of the consequences of our actions.

This doesn’t mean we’ll all come to the same conclusions. We’ll still clash with each other’s emotions, desires, and beliefs. If anything, our arguments will be more intense, buttressed not just by strongly held opinions but by intricate reasoning. People in 2030 will look back aghast at how ridiculously unsubtle the political and cultural disputes of our present were, just as we might today snicker at simplistic advertising from a generation ago.

Conversely, the debates of the 2030s would be remarkable for us to behold. Nuance and multiple layers will characterize even casual disputes; our digital assistants will be there to catch any references we might miss. And all of this will be everyday, banal reality. Today, it sounds mind-boggling; by then, it won’t even merit comment.

What happens if such a complex system collapses? Disaster, of course. But don’t forget that we already depend upon enormously complex systems that we no longer even think of as technological. Urbanization, agriculture, and trade were at one time huge innovations. Their collapse (and all of them are now at risk, in different ways, as we have seen in recent months) would be an even greater catastrophe than the collapse of our growing webs of interconnected intelligence.

A less apocalyptic but more likely danger derives from the observation made by the science-fiction author William Gibson: “The future is already here, it’s just unevenly distributed.” The rich, whether nations or individuals, will inevitably gain access to many augmentations before anyone else. We know from history, though, that a world of limited access wouldn’t last forever, even as the technology improved: those who sought to impose limits would eventually face angry opponents with newer, better systems.

Even as competition provides access to these kinds of technologies, though, development paths won’t be identical. Some societies may be especially welcoming to biotech boosts; others may prefer to use digital tools. Some may readily adopt collaborative approaches; others may focus on individual enhancement. And around the world, many societies will reject the use of intelligence-enhancement technology entirely, or adopt a cautious wait-and-see posture.

The bad news is that these divergent paths may exacerbate cultural divides created by already divergent languages and beliefs. National rivalries often emphasize cultural differences, but for now we’re all still standard human beings. What happens when different groups quite literally think in very, very different ways?

The good news, though, is that this diversity of thought can also be a strength. Coping with the various world-historical dangers we face will require the greatest possible insight, creativity, and innovation. Our ability to build the future that we want—not just a future we can survive—depends on our capacity to understand the complex relationships of the world’s systems, to take advantage of the diversity of knowledge and experience our civilization embodies, and to fully appreciate the implications of our choices. Such an ability is increasingly within our grasp. The Nöocene awaits. Ω

[Jamais Cascio is an affiliate at the Institute for the Future and a senior fellow at the Institute for Ethics and Emerging Technologies. Cascio has degrees in Anthropology, History and Political Science, but information about the awarding institution(s) is fugitive.]

Copyright © 2009 by The Atlantic Monthly Group


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves

Friday, June 26, 2009

The "Decadent & Sick" Dumbos Deserve A Good Rock Upside Their Heads!

The Book of Leviticus has it right. Let's start chunkin' rocks at the Dumbos. Every last one of them and their fellow-travelers who espouse racial hatred and religious hatred. Chunk rocks at every Neo-Nazi, Klan member, and clinic bomber who runs with the Dumbos. Thank God (of your choice) that rocks are plentiful. Come face-to-face with The BFI or BillO The Clown? Start lookin' for a rock to chunk. This blogger is sick'n tired of the sanctimonious hypocrisy. Give new meaning to "Rock The Vote!" Just call this blogger "Kid Rock." Every Dumbo should get stoned (literally). If this is (fair & balanced) petrology, so be it.

[x Salon]
Remind Me: Which Political Party Is "Decadent" And "Sick"?
By Joe Conason


Whenever the latest Republican politician is caught with his zipper undone, a predictable moment of introspection on the right inevitably ensues. Pundits, bloggers and perplexed citizens ruminate over the lessons they have learned, again and again, about human frailty, false piety and the temptations of flesh and power. They express concern for the damaged family and lament the fall of yet another promising young hypocrite. They resolve to restore the purity of their movement and always remember to remind us that this is all Bill Clinton's fault. What they never do is face up to an increasingly embarrassing fact about themselves and their leaders.

They're really just liberals in right-wing drag.

The proof is in the penance, or lack thereof, inflicted on the likes of Mark Sanford, John Ensign and David Vitter, to cite a few names from the top of a long, long list. For ideologues who value biblical morality and believe in the efficacy of punishment, modern conservatives are as tolerant of their famous sinners as the jaded libertines of the left. Even after confessing to the most flagrant and colorful fornication, the worst that a conservative must anticipate is a stern scolding, followed by warm assurances of God's forgiveness and a swift return to business as usual.

Mark Sanford may have forfeited his presidential ambitions, but the South Carolina governor seems determined to hold onto his office despite his escapade in Argentina — and if he is thrown out, the reason will be his offenses against good government rather than his betrayal of his marriage vows. John Ensign isn't expected to step down from the Senate, despite the mounting evidence that he concealed his extramarital affair through the misuse of public funds; even now he remains more popular than fellow Nevadan Harry Reid, the Democratic majority leader. And then there is David Vitter, the Louisiana bon vivant whose evangelical constituents seem inclined to reward him for consorting with prostitutes by giving him another Senate term. The safest prediction is that these pharisaical pols will continue their careers without suffering the retribution they have earned.

According to the Old Testament — a text regularly cited by these worthies as the highest authority in denouncing reproductive freedom and gay rights — the proper penalty for adultery is death by stoning. Leviticus is quite clear on this point (as any truly strict originalist could hardly deny). Fortunately for all of us, biblical law doesn't rule this country, despite the zealots on the religious right who disdain separation of church and state. Very few Americans believe that we should impose state sanctions, let alone the death penalty, on private peccadilloes. But civic tolerance doesn't excuse the limp, smiling attitude of the Republican right toward the infidelity of its leaders.

That flabby acceptance contrasts sharply with right-wing screaming about the iniquity of the opposition. As understood by conservative commentators, this is not mere rhetoric but a theory of civilization's rise and fall. Ann Coulter believes that liberals actively "seek to destroy morality" by "refusing to condemn what societies have condemned for thousands of years," including "promiscuity" and "divorce." Dinesh D'Souza once recommended sarcastically that the Democrats adopt the mantle of "moral degeneracy" by forthrightly advocating "divorce, illegitimacy, adultery, homosexuality, bestiality and pornography."

The supposed depravity of the Democratic Party has long been a favorite theme of conservatives, dating back to the rise of Newt Gingrich, who distributed an official campaign lexicon to Republican congressional candidates that featured such defining insults as "decadent," "permissive," "sick," "selfish" and, of course, "liberal." Back then the Georgia Republican was on his second marriage and carrying on a clandestine affair with the young Capitol Hill clerk who would eventually become his third wife (after he converted to Catholicism and had his union with wife No. 2 annulled). In 2007, he admitted on James Dobson's radio show that he was cheating on wife No. 2 with future wife No. 3 while he was publicly chastising President Clinton for consorting with Monica Lewinsky. Gingrich has remained a consistent favorite among his pious comrades.

Today, in fact, Gingrich is fully rehabilitated as a party spokesman, still nurturing presidential ambitions. So why should any other Republican fear the wrath of the righteous? The disappointment in Sanford and Ensign among the devout must be particularly keen, since they have so rigorously aligned themselves with the most fervent elements of the religious right.

For more than a decade, Ensign lent his name to Promise Keepers, the all-male Christian prayer movement run by a former Colorado football coach, whose mass rallies highlighted men's integrity, purity and uncompromising domination of family life. Both he and Sanford have worked closely with the Family, a secretive Christian fellowship on Capitol Hill that maintains a brick townhouse where Ensign and other members of Congress have resided. Over the years both men have won the highest marks from the Family Research Council, the Christian Coalition and the American Family Association — and until the other day, Sanford was featured as an invited speaker at the Family Research Council's upcoming Values Voters Summit 2009. (As Pam Spaulding and Think Progress noted, however, the FRC removed his photo from the summit Web site immediately following his confessional press conference.)

Certainly there is considerable pressure for Sanford to resign in South Carolina, and perhaps he will surrender. But he might well ask whether that is fair when Ensign is hanging on and Vitter appears to be in the clear. For a while, Family Research Council president Tony Perkins had threatened to challenge Vitter in the Republican primary next year, but last March he announced that he won't run after all — and instead endorsed Vitter for reelection. Amazingly, Perkins then hosted a radio broadcast with Vitter as his guest, where they tut-tutted over the alleged ethical problems of Health and Human Services Secretary Kathleen Sebelius. Nobody had the poor taste to mention the infamous black books in which Vitter's friendly madams in Washington and New Orleans had inscribed his name and phone number.

By the way, while Vitter, Ensign, Gingrich and perhaps Sanford have been able to retain their positions and political viability, the same cannot be said for the most recent offenders on the progressive side. Neither Eliot Spitzer nor John Edwards, each among the most promising figures in the Democratic Party, will ever be a candidate for public office again, although their misbehavior was no worse than what their Republican counterparts did.

If they looked honestly at themselves, religious conservatives might notice that they are morally lax, socially permissive and casually tolerant of moral deviancy — just like the liberals they despise. So as they wonder aloud why the same salacious nightmare haunts them, year after year, the best advice they can get happens to come from that old sinner Clinton. As he so often says, the definition of insanity is to keep doing the same thing while expecting a different outcome. Ω

[Joe Conason writes a weekly column for Salon and the New York Observer. Conason received a B.A. in History from Brandeis University in 1975. He then worked at two Boston-based newspapers, East Boston Community News and The Real Paper. From 1978 to 1990, he worked as a columnist and staff writer at The Village Voice. From 1990 to 1992, Conason was "editor-at-large" for Details magazine. In 1992, he became a columnist for the New York Observer, a position he still holds. Conason has written a number of books, including Big Lies (2003), which addresses what he says are myths spread about liberals by conservatives. His new book is It Can Happen Here: Authoritarian Peril in the Age of Bush.]

Copyright © 2009 Salon Media Group, Inc.


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves

Marcy Meets Dante At The Inner Ring Of The 7th Circle Of Hell & It's Priceless

In The Divine Comedy (1321), Dante Alighieri places the usurers in the inner ring of the seventh circle of Hell, below even the suicides. (Showing how cultural attitudes have changed since the 14th century, the usurers' ring was shared only by the blasphemers and sodomites.) Today's usurers are embodied by MasterCard, Visa, and their ilk. If this is (fair & balanced) disdain of predatory lending, so be it.

[x YouTube/VersusPlus Channel]
"Where Credit Is Due"
By Marcy Shaffer
Parody of "Lady Marmalade" (1974) — Words and Music by Bob Crewe and Kenny Nolan




OOO, CREDIT.
WHO LED IT?
WHO SPREAD IT?
WHO FED IT?
OOO, CREDIT.
WHO LED IT?
WHO SPREAD IT?
WHO FED IT

HIS FIRST MASTERCARD CAME WHEN HE WAS A LAD.
SENT TO HIS DORM WITH AN AD.
IT SAID: "HELLO, NEW GRAD!
WE KNOW THERE'S STUFF YOU WANT BAD."

"ITCHIN' FOR SOME BITCHIN' GOODIES?
THAT BEWITCHIN' IPOD STARRED?
ABERCROMBIE FITCHIN' HOODIES?"
HE OK'D THE MASTERCARD.

PLASTIC IS FANTASTIC TO GET, NO SWEAT.
PLASTIC IS FANTASTIC, YOU BET.

CAJOLED TO GO GOLD.
HE ENROLLED.
THEY SAID: "HERE:
HAVE NO CAREER?
NEVER FEAR.
LIVE LARGE, CHARGE ALL YOUR GEAR.
JUST PAY THE MINIMUM, DEAR."

"WHY NOT TRY THOSE CASH ADVANCES?
WHY NOT BUY A SAINT BERNARD?
WHY NOT FLY WHEREVER FRANCE IS?"
HE OBEYED THE MASTERCARD.

PLASTIC IS FANTASTIC TO LET YOU JET.
PLASTIC IS FANTASTIC, NET NET.

THE DATE HE PAID LATE MADE HIS RATE INFLATE.
TEN PERCENT TO TWENTY-EIGHT.
COLLECTION AGENCIES GALORE
TREKKED TO SQUEEZE THIS OBLIGOR.
FOR
MORE.
MORE!

NOW HE HAS SHREDDED HIS CREDIT SCORE.
NOW HE IS POOR TO THE CORE.
STILL IT'S ALL BUT FOR SURE.
HE'LL DETOUR TO SOME STORE.
FOR
MORE.
MORE!

HE, HE DIDN'T SEE THE KEY FAULT.
CHICHI LUXURY COMES HARD.
HE IS LEGALLY IN DEFAULT.
BE AFRAID OF MASTERCARD.

PLASTIC CAN BE DRASTIC FOR DEBT NOT MET.
PLASTIC CAN BE DRASTIC REGRET.

PLASTIC CAN BE DRASTIC TO DISREGARD.
DISASTER TO MISS MASTERCARD. Ω

[Janis Liebhart - Lead Vocal, Background Vocal
Angie Jarée - Background Vocal
Gary Stockdale - Background Vocal
Greg Hilfman - Music Director]

℗ © 2009 RMSWorks. Lyrics © 2009 RMSWorks.


Copyright © 2009 Sapper's (Fair & Balanced) Rants & Raves