Thursday, October 31, 2013

In Today's Pop Quiz, The Correct Answer Is... A !!!!!!!!!!

Yesterday, we learned that hating is fun. Today, we learn to hate the Confederate battle flag and every knuckle-draggin' bottom-feeder (mostly Dumbos & Morons) who waves it or fixes decals of it to their vehicle bumpers. The BIG MISTAKE occurred after Appomattox. The government of the United States of America should have tried, convicted, and executed every Confederate from Jefferson Davis down to the field-grade officers of the rebel army. Every last one of them should have gone to the wall and faced a firing squad. That would have included a posthumous sentence on the likes of Thomas J. "Stonewall" Jackson. Away with every one of them from the "Lost Cause." They were traitorous losers and deserved that end. To hell with binding up old wounds and acting without malice and granting charity for all. The scum and their descendants deserve our contempt and enmity. As Representative Pete Sessions (R-TX) recently said to POTUS 44 in the White House: "I cannot even stand to look at you" — this blogger would rather look at cockroaches breeding than gaze upon a single Dumbo or Moron or their advocates. That includes Representative Pete Sessions. Scum deserve no respect. If this is a (fair & balanced) response to treason in our midst, so be it.

[x TNR]
I Used To Hang A Confederate Flag On My Wall & Here's Why I Took It Down
By Chuck Thompson


Here’s a handy multiple-choice test for measuring the tenor of your political orthodoxy:

The Confederate flag
a) Should be outlawed for its association with treason, slavery and other forms of oppression
b) Is a complicated yet important piece of southern heritage popularly associated with courage, sacrifice and honor
c) Flies proudly in front of many public buildings in the South and if you don’t like it you’re either a socialist or a community organizer
d) Looks awesome when rendered as a bikini top with a pair of Daisy Dukes

Your answer to that question likely corresponds with the reaction you had a couple weeks ago when a Confederate flag showed up in front of the White House at a Tea Party protest during the federal government shutdown.

Or last week when Mother Jones revealed Mississippi Senate candidate Chris McDaniel had spoken before a conference run by a neo-Confederate organization that promotes modern secession.

Because on occasion I’ve written about the South in a somewhat critical fashion, I’m often asked about the flag that North Carolina scholar John Shelton Reed has called “one of the most divisive, hurtful symbols in American history.”

With respect to Reed, I’d remove the qualifying “one of the” language from that appraisal.

Still, like anyone intelligent enough to hold competing ideas in his head at the same time, I understand that, however jauntily they might be presented, each of the responses to that question above is at least partially valid.

If the Confederate flag is divisive, it’s because the issues surrounding it bisect the politics and passions of everyone from public mucous snorters and Hannity huffers to gluten-averse liberal artistes and run-of-the-mill defenders of freedom, justice and a soccer-free America, such as myself.

In other words, everyone.

This is why I wish progressives would grow up a little and stop with the phony outrage every time some yahoo nursing the old eternal Dixie grudge shows up at a rally wrapping his cause in the standard of Johnny Reb—as though they’d never before borne witness to mouth-breather trailer wrath and are at a loss to explain why it’s such a powerful agent of dissent.

It’s also why I wish rage-a-holic righties would grow up a lot, resist the intoxicating rush of their weekly pre-adolescent temper tantrums and govern themselves with logic and whatever portion of New Testament charity they apparently missed on their first 9,000 readings of The Bible. (Just kidding on that last one, I know y’all don’t actually read that book.)

GALLANTLY STREAMING

Like it or not, the Confederate flag is part of a national heritage shared by us all. For anyone who imagines its legacy is confined to the South—or that there’s anything even remotely new about the manner in which it’s presently being used as a symbol of antipathy and defiance—here’s a quick review of the muddy declension of events that brought the Confederate standard to Obama’s doorstep.

The rebel flag first resurfaced in an official capacity after the Civil War when the state of Georgia adopted a version of the Confederacy’s original Stars and Bars (not the “Confederate flag” as it’s commonly known today) as its state flag in 1879. In the 1890s, Mississippi and Alabama fashioned state flags based on the St. Andrew’s cross battle flag, the one we now see rendered on everything from trucker caps to the contemporary Mississippi flag. These state flags “served notice that Reconstruction was over and that white Democrats were now once again in control,” writes University of Georgia history professor and former president of the Southern Historical Association, James C. Cobb, in Away Down South: A History of Southern Identity (2007).

In 1916, just a year after a fiery ritual at Stone Mountain, Georgia, proclaimed the resurgence of the Ku Klux Klan, a massive 30-by-50-foot Confederate flag was draped over the mountain’s stark granite face by way of announcing the creation of a memorial to Confederate glory. The infamous carving of southern royalty (Stonewall Jackson, Robert E. Lee, Jefferson Davis) that today adorns Stone Mountain is one of Georgia’s most popular tourist attractions.

According to the Encyclopedia of Southern Culture, however, the flag “did not reach its great popularity until the 1950s, possibly owing to widespread southern white dissatisfaction with the federal government.” (Sound familiar?) The flag thrived as an emblem of Old South bigotry through Jim Crow, Jerry Falwell and the Kenyan-Muslim socialist takeover of the White House.

There’s another side to the flag, of course, one that proponents insist stands for the positive attributes of an irrepressible heritage. According to this reading, the flag has transcended its original battlefield purpose of allowing soldiers fighting on behalf of plutocrats hell-bent on preserving a slave-based economy to distinguish themselves from the soldiers fighting on behalf of a government desperately trying to preserve its Union.

The most insightful elucidation of the unexpected metamorphosis from slave-state battle flag to quasi-national badge of patriotic glory to sheet of stubborn resentment is found in Tony Horwitz’s Pulitzer Prize-winning Confederates in the Attic (1999): “The banner (seems) to have floated free from its moorings in time and place and become a generalized ‘Fuck You,’ a middle finger raised with ulceric fury in the face of blacks, school officials, authority in general—anyone or anything that could shoulder some blame for [southerners’] difficult lives.”

Confusing things further, the flag that Horwitz also describes as a “talisman against mainstream culture” has become such a powerful symbol of all-around anti-authoritarianism that it’s been adopted, at one time or another, by independence factions in Israel, Northern Ireland and Soviet Georgia.

However tempting it is to think that everyone who displays that flag is a secret (or not so secret) racist, it’s simply not true. With the exception of a minuscule percentage of deviants—though let us never underestimate the power of minuscule deviants—none of the people who wave that flag or stick it on the bumper of their F-250 do so because they’re advocating the re-imposition of slavery across the land.

BROAD STRIPES AND BRIGHT STARS

Time for a confession.

From the summer that I was 12 until I was about 14, a large Confederate flag hung on a wall in my bedroom. I’d picked up the flag as a souvenir during a summer vacation that included a visit to relatives in Georgia, a sweltering August death march up Stone Mountain and multiple viewings of “Smokey and the Bandit.”

In either the hot slog up Stone Mountain or Burt Reynolds’ cross-country beer run, it was impossible to escape the pungent aroma of Dixie pathos. Jackson, Lee and Davis were heroic figures of internecine defiance. The South was the Bandit. The United States was Smokey. All of them were cartoons, but no one could confuse the cool ones for the constipated authority stiffs. And no 12-year-old could confuse the ones with which he was intended to identify (assuming he wasn’t a future Young Republican).

There’s an elemental appeal in that flag, something everyone who’s ever flown it intrinsically understands: it’s really fucking cool. Ask Robert E. Lee. Ask Lynyrd Skynyrd. Ask Kanye West (of all people), who’s currently flogging tour T-shirts emblazoned with the Confed flag. It conveys an undeniable juju.

And it’s been that way from the start. As chronicled by southern flag scholar Robert Bonner, on his first glimpse of the flag in 1861, a London Times correspondent named William Howard Russell was mesmerized by the way “these pieces of coloured bunting seem to twine themselves through heart and brain.”

That Confederate flag wasn’t precisely “mine,” but by flying it in my bedroom, I was making an immature attempt to confer upon myself not only a place in America’s proud pantheon of iconoclastic outliers, but a bit of the mule-headed nobility of the self-styled mutineer—not to mention the red clay dignity of the American commoner. I might have been living the middle-class life in Southeast Alaska, but as Blake Shelton and Trace Adkins would observe several decades later, “We all got a hillbilly bone down deep inside.”

Physically distant though it may be, Alaska then as now had much in common with the South. Not two decades removed from statehood (1959) and the massive land and resource grab that commenced with the discovery of oil in Prudhoe Bay in 1968, there was at the time plenty of casual talk in certain beery quarters about the travesty of compromise that came with statehood and economic surrender to a government thousands of miles and cultural light years away.

The Alaskan Independence Party was a fringe but nevertheless tolerated part of the political landscape. At eighteen, I briefly toyed with an impulse to check AIP on my voter registration form before ticking the box next to “Independent.”

There’s honor in sticking to your guns, in not selling out, in remaining a rebel forever. By eighteen, however, I was already coming to the realization that there’s also a great measure of immaturity—and, more dangerous, a toxic mixture of self-pity and self-destruction—in clinging to willful obstinacy for obstinacy’s sake.

A lot happened in the few years after I picked up that flag—I read a few books, expanded my experiences, began to think about the world not as a place that owed me something but as an organism that I was a part of.

I grew up.

Somewhere along the way that flag disappeared.

PERILOUS FIGHT

If throwing our hands in the air and surrendering to a future of bitter national schizophrenia is unsatisfactory (it is for me), how then do we begin to repair the common bond that divides us?

After the requisite cost-benefit analysis (Is what we’re gaining by keeping an insurgent spirit alive worth what we’re losing in national unity?) every time I turn the flag issue over I invariably come back to the same elemental question: Isn’t it time we grew up and stopped being such a bunch of self-destructive assholes over this?

The most obvious solution is also the most difficult one: do with the Confederate flag what Germany did to the swastika after World War II—make it illegal. Ban it from public display, outside of museums and select academic settings. Yes, the First Amendment protects the right of citizens to fly the flag, but if an act of Congress isn’t possible, at the very least relegate the flag and its promoters to the social status of the N-word.

With one dramatic stroke of a pen, a progressive southern government could sound the purest note of harmony this country has heard since the ones played on those mythic piccolos at Valley Forge. While progressive southerners are at it, they might as well sandblast that gaudy tribute to slave masters on Stone Mountain. Better yet, let a squadron of southern flyboys in F-22s use it for missile practice.

Do this, of course, and you really would have a full-scale rebellion on your hands. Riots. Barricades. Bombs. If nothing else that’d prove that the war between the states never really has stopped.

None of this, of course, is for me to decide. Predictably, it’s not even for those most adversely affected to decide—that being the one group who will never see anything patriotic or noble or even remotely cool in that flag. A group for whom it stands largely if not exclusively for the perpetuation of the idea of white supremacy and a glorification of the history of subjugation and dehumanization endured by their ancestors. (Kanye and his ill-informed minions excluded.)

In Away Down South, Cobb quotes black Mississippi radio personality and entrepreneur Rip Daniels, who with one eloquent statement summarizes the poisoned past and dysfunctional future we can expect so long as the rebel flag remains such a vital symbol.

“You leave me no choice but to be your enemy as long as you wave a battle flag,” says Daniels. “If it is your heritage, then it is my heritage to resist it with every fiber of my being.”

Look at that question at the top of this page again. Every answer might be correct, but only one of them is right. Until the majority of southerners agree on which one that is—and actually do something about it—the national divide we face will remain just as strong as the symbol that fuels it. Ω

[Raised in Juneau, Alaska, Chuck Thompson graduated from the University of Oregon with a double BA (history and journalism); he has lived in Japan, Hong Kong, New York City, Dallas, and Portland, OR, and traveled on assignment in more than 50 countries. Thompson, currently editorial director for CNNGo.com, most recently wrote Better Off Without ‘Em: A Northern Manifesto For Southern Secession (2012).]

Copyright © 2013 The New Republic



Creative Commons License
Sapper's (Fair & Balanced) Rants & Raves by Neil Sapper is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Based on a work at sapper.blogspot.com. Permissions beyond the scope of this license may be available here.



Copyright © 2013 Sapper's (Fair & Balanced) Rants & Raves

Wednesday, October 30, 2013

Q: Why Have Nearly 4,000 Posts Appeared In This Blog? A: Because Hating Dumbos & Morons Is FUN!!!

Rachel Toor is absolutely right: hating is fun. That is why this blog is chock-full of hate for Dumbos, Morons, and other bottom-feeders. Besides, this blogger cannot get on his roof to scream his angry words. Ditto for a streetcorner location. This blog is a virtual rooftop or streetcorner. If this is a (fair & balanced) defense of vituperation, so be it.

[x CHE]
On The Pleasure Of Hating
By Rachel Toor


My brother started his career in the corporate world, moved into human resources, and then made the switch to Legal Aid. Recently he got a job in academic administration. Even though we'd grown up with professor parents, I suspected he had no idea what he was in for.

My best advice to help him understand academe was that he read various Web sites and blogs on higher education. I didn't mean the reported articles, opinion pieces, advice columns, or blog posts, though of course all of those would provide good information and insight. No, I told him, if you want to get a bead on the academy, read the anonymous online comments.

There you can find, in full and flamboyant flower, all that is good about contemporary life in universities. You see people passionate about ideas, thinking on the (electronic) page, responding to and pushing forward the arguments of others. In the comments section you find nuanced critiques: well-written mini-essays that are so smart, and often funny, that you sometimes wish they were longer. You see generous sharing of best practices in the classrooms by teachers who truly care about their students and are looking for ways to improve and to help others.

Read the comments section and you'll come to understand you're not alone if you move for a job to a town that you can't learn to love, or if you sometimes struggle to keep your cool during long and windy committee meetings. You'll find a big and expansive community and think: So this is what it's like to have great colleagues, people who've chosen to forsake money, prestige, and power in order to live in a world where being a nerd is cool.

But that is, unfortunately, not all you will find in the comments. You'll also find the haters.

And they are everywhere. Nasty online comments, cyberbullying, and Twitter wars are a vexing and familiar part of daily life. Even on the comments pages of clothing catalogs and in the help forums for computer woes, you will see people chime in with personal and ugly attacks. Along with the many gifts brought to us by the Web, it has also allowed people who previously had no access to the communicative reins of power to scream from their (still powerless) rooftops.

As an editor at Oxford and Duke University Presses, I encountered few trolls. Maybe that's because nearly everyone I met was in some measure successful. They were doing research and, if they were talking to me, were well on their way to publishing. I saw little of the academic horror stories I read in David Lodge novels and other dysphoric fictional portraits of higher education. Though the reader's reports I commissioned from experts in various fields could be critical, rarely were they unkind or intemperate. The authors didn't know the identity of the readers, but I did. Maybe that kept down the level of nastiness, or maybe the civility was a by-product of using confident reviewers with secure positions, but it seemed amazing that for so little real-world currency, established scholars would devote such intellectual capital toward evaluating the projects of other, often more junior, researchers.

Perhaps my rosy view of academic civility was due to my own naïveté and privileged position as an editor. But now, after working as a faculty foot soldier and writing for publications in higher education, my perception has changed.

Academics are not the only anonymous online bile-spewers, but they may be the most multisyllabically, source-referencingly verbose. Something in me wants people with advanced educations to be better than the lugheads who write the barely intelligible nasty anonymous comments on other sites. I dream, with an innocence I cling to, that academics can be better than the teens who bully each other into depression and suicide. I want our students to see us as examples—not only of how to write and how to argue, but of how to behave.

And in fact, I am frequently struck by how smart and legitimate the critiques are. At times I feel the conversations in the comments are more important than the essay to which they are responding. It can be an arena of real intellectual exchange. I wonder, though, if there's something peculiar about the way academics argue. Someone recently reminded me about an old New Yorker cartoon that shows half a dozen hominids circled around. One says, "This meeting has been called to discuss the meat. It has been pointed out that there is no more meat. A motion has been made to fight over the bones." Online commenters often seem to squabble because they're poised to squabble. They can't help it.

Is it because when resources are scarce in academe, we can expect bitterness to bloat like bread dough?

The "two cultures" of academe used to refer to the sciences and the humanities. Is the great divide now between the haves and the have-nots—those with the T-word in their job description and those who are "contingent"? Many of the most scathing comments seem to be from Ph.D.'s who are unable to get the jobs they want. I don't blame them for being upset; the job market stinks. But I do wish they could find a more productive outlet for their energy and anger than their fellow Ph.D.'s who got lucky enough to find a tenure-track job.

The tone—OMG the tone! If only the literate folks who contribute comments to Web sites, especially academic ones, would keep their criticism sharp and temper the tone. Surely they are more careful when they respond to their students' work. Why be less civil and thoughtful when engaging with colleagues? Why not follow the rhetorical advice we give our apprentices? That is, you need to make a fair and well-reasoned argument if you want anyone to pay attention to you.

Maybe people who post vitriolic comments are those whose careers haven't worked out as they had planned. Maybe the comments section is stacked with graduate students, who, in their overworked, disenfranchised state, can seem perpetually angry and are still learning norms of professional behavior. Maybe more reasonable and less angry readers move along without leaving a trace and thus the discussion can tilt toward those eager to express resentment and outrage. Maybe the anonymous online comments are so ugly because no one has to take responsibility or suffer consequences for what they write.

I understand the reasons behind anonymous readers' reports, but I never write anything unless I'm prepared to sign my name. In order to participate in the peer-review process, you have to have done something that makes you a peer; you have to have verifiable chops in the field. If people had to post under their real names the level of discourse would rise and little of value would be lost. Comments like "Grow up!" or "You need help!" might disappear. Petty, personal battles between commenters might not go on for so many posts. Perhaps the challenge is to write as if your identity will be known, even if you choose to use a pseudonym.

Nearly 10 years ago a computer glitch on Amazon revealed the identity of anonymous reviewers. The results were sadly predictable. Prominent authors were giving themselves five-star reviews, and personal axes were being ground. We might expect bad behavior from those wacky artists. They can be oh-so-flighty and temperamental; they have passions. In an interview about the flap, the author Jonathan Franzen said, "When I've been tempted to write a nasty review online, I have never had attractive motives."

Maybe that's it: It's human, all too human, to hate. In his 1823 essay, "On the Pleasure of Hating," William Hazlitt wrote:

"The pleasure of hating, like a poisonous mineral, eats into the heart of religion, and turns it to rankling spleen and bigotry; it makes patriotism an excuse for carrying fire, pestilence, and famine into other lands: it leaves to virtue nothing but the spirit of censoriousness, and a narrow, jealous, inquisitorial watchfulness over the actions and motives of others."

Hazlitt delves deep into his own hatreds. He has many—and many that we all, if we're honest, share—and he explores them in a way that makes me love him. "But so it is, that there is a secret affinity, a hankering after, evil in the human mind," he writes, "and that it takes a perverse, but a fortunate delight in mischief, since it is a never-failing source of satisfaction."

When I think about the haters, about the teen bullies and the slut-shamers, about the folks for whom someone's byline alone can be enough to provoke venomous online comments, I have to remind myself: They do it because it's fun. Ω

[Rachel Toor is an associate professor of creative writing at Eastern Washington University's writing program in Spokane. Toor received a BA from Yale University and an MFA from the University of Montana. Prior to joining the faculty of Eastern Washington University, she spent a dozen years as an acquisitions editor at Oxford and Duke University Presses.]

Copyright © 2013 The Chronicle of Higher Education




Tuesday, October 29, 2013

The JFK Assassination Had Another Victim

Adam Gopnik's survey of Kennedy assassination books brought this blogger back to lunchtime on Friday, November 22, 1963. This blogger was returning his lunch tray at the dishroom counter when Charley, the dishwasher, relayed the news from the radio on the dishroom window sill: JFK had been shot in Dallas and was reported dead. The blogger was marking time as a teacher at a prep school in suburban Denver, waiting for travel orders from The Company. Back in August, before the Dream Speech, this blogger had spent a week in DC as an applicant for a job with The Company. His mail contact was a fictitious junior-grade Army officer in The Pentagon (later identified as a CIA mail drop). Between late August and November 22, 1963, the blogger was being evaluated for a security clearance. Then, bang! LBJ immediately imposed a federal hiring freeze, and the blogger heard little or nothing more from the fictitious correspondent at The Pentagon. In a few months, the blogger embarked on graduate study that culminated in an MA in history in June 1965. With that ticket in hand, the blogger received a job offer from a juco on the Illinois banks of The Big Muddy. That summer, as Nam was heating up, the blogger received a phone call from a gravel-voiced retired general who wanted to meet the blogger to discuss covert work with The Company. The blogger replied, "Thanks, but no thanks." End of the JFK episode in this blogger's life. His last memory of The Company came at his exit interview with his handler. After cautioning the blogger to use the cover story about upcoming employment, the handler said, "We need to settle up on your per diem." The Company paid for air travel and included a $50-per-day stipend. The handler reached into a bottom drawer of his desk, brought out a metal cash box (like those used by ticket-sellers at the fair or at high school football games), extracted a combination of two C-notes and five sawbucks ($250), and handed the cash to this blogger as the interview ended. If this was a (fair & balanced) illustration of "In God We Trust, All Others Pay Cash," so be it.

[x New Yorker]
Closer Than That
By Adam Gopnik


Poets are not the unacknowledged legislators of the world, lucky for us, but they can be worldly judges of poetic legislators. Lincoln’s soul survives in Whitman’s words, and the response of American poets to the assassination of John F. Kennedy, fifty years ago, suggests that there really was, beyond the hype and the teeth, an interesting man in there. An entire volume of mostly elegiac poems, "Of Poetry and Power," with a Rauschenberg silk-screen portrait of the President for its cover, came out within months of his murder. (It was even recorded, complete, on Folkways Records.)

John Berryman wrote a “Formal Elegy” for the President (“Yes. it looks like wilderness”); Auden an “Elegy for J.F.K.,” originally accompanied by twelve-tone music by Stravinsky. Robert Lowell—who in the Second World War had gone to prison as a conscientious objector, and in the late sixties became a Pentagon-bashing radical hero—wrote to Elizabeth Bishop that the murder left him “weeping through the first afternoon,” and then “three days of television uninterrupted by advertising till the grand, almost unbearable funeral.” The country, he said, “went through a moment of terror and passionate chaos.” Lowell’s friend and fellow-poet Randall Jarrell called it the “saddest” public event that he could remember. Jarrell tried to write an elegy but could get no further than “The shining brown head.”

This passionate chaos was set loose, then, in every back yard. It is easy to be cynical about it in retrospect—being cynical about it in retrospect is by now a branch of American historical studies—and say that the poets’ overwrought grief was the product of a sleight of hand worked by Jackie, no other group so easily bought as American writers. (Even the Salingers were invited to the White House—and Mrs. Salinger wanted to go!) But there was more than that. The death of J.F.K. marked the last time the highbrow reaches of the American imagination were complicit in the dignity of the Presidency. In Norman Mailer’s Presidential Papers (1964), also published soon after Kennedy’s death, the point is that there was a “fissure in the national psyche,” a divide between the passionate inner life of America and its conformist, repressed official life: “The life of politics and the life of myth had diverged too far.” For Mailer, Kennedy’s Presidency supplied the hope of an epiphany wherein the romantic-hero President would somehow lead his people on an “existential” quest to heal this breach. It sounded just as ridiculous then, but there was something gorgeous in the absurdity.

Of course, people made fun of Kennedy—the Kennedy impersonator Vaughn Meader was the single biggest loser after the assassination. (“Poor Vaughn Meader,” Lenny Bruce is said to have muttered in his standup act on the night of the killing.) And the John Birch right wingers hated him as implacably as their children do Obama. But the king always has his fool, and the haters were largely marginalized. Lowell wondered what character in Shakespeare Bobby, the dour younger brother, most resembled. Finding Shakespearean dimensions in politicians was an accepted sport. This kind of contemplation became increasingly incredible in the years that followed. (L.B.J. could be Macbeth, but only as the burlesque MacBird [1967].) Reagan and Clinton were both larger-than-life figures drawn from simpler American entertainments—Mr. Deeds and the Music Man, the wise innocent in power or the lovable fast-talking con man who turns out to be essential to everyone’s happiness. Kennedy, by contrast, was still seen as a king of divine right out of the seventeenth century—the subject of endless reverie about his capacity to renew the world. And so the obsession with his body, that shining head, recalling the seventeenth-century French court watching the King sleep and rise and defecate, leads in the end to the grisly conspiracy-theory compulsion to review every square inch of his autopsied body. (One conspiracy theorist, David Lifton, said once that he never married because every would-be bride realized that he was more interested in the President’s dead body than in her living one.)

The nation really did get turned inside out when Kennedy was killed, as nations do at the death of kings. But what altered? In many ways, it was a time more past than present. Though it’s said that the event marked the decisive move from page to screen, newspaper to television, all the crucial information was channelled through the wire-service reporters, who, riding six cars back from the President’s, were the first to get and send the news of the shots, and were still thought of as the authoritative source. Walter Cronkite’s two most famous moments—breaking into “As the World Turns” to announce, “In Dallas, Texas, three shots were fired”; and his later, holding-back-tears “From Dallas, Texas, the flash, apparently official: President Kennedy died at 1 p.m. Central Standard Time”—were in both cases simply read from the wire-service copy. You can see the assistants ripping the copy from the teleprinter and rushing it to the anchorman.

Yet an imbalance between the flood of information and the uncertainty of our understanding—the sense that we know so much and grasp so little, and that reality becomes an image passing—does seem to have begun then: the postmodern suspicion that the more we see, the less we know. A compulsive “hyperperspicacity,” in the term of one assassination researcher—the tendency to look harder for pattern than the thing looked at will ever provide—became the motif of the time. To dive into the assassination literature fifty years on—to read the hundreds of books, with their hundreds of theories, fingering everyone from Melvin Belli to the Mossad; to visit Dealey Plaza on trips to Dallas; and to venture in the middle of the night onto the assassination forums and chat rooms—is to find two truths overlaid. The first truth is that the evidence that the American security services gathered, within the first hours and weeks and months, to persuade the world of the sole guilt of Lee Harvey Oswald remains formidable: ballistics evidence, eyewitness evidence, ear-witness evidence, fingerprint evidence, firearms evidence, circumstantial evidence, fibre evidence. The second truth of the assassination, just as inarguable, is that the security services collecting that evidence were themselves up to their armpits in sinister behavior, even conspiring with some of the worst people in the world to kill the Presidents of other countries. The accepted division of American life into two orders—an official one of rectitude, a seedy lower order of crime—collapses under scrutiny, like the alibi in a classic film noir.

“Know why you couldn’t figure this one, Keyes?” the guilty Walter Neff (Fred MacMurray) tells his virtuous insurance colleague Barton Keyes (Edward G. Robinson) at the end of the great “Double Indemnity,” in a taunting confession. “I’ll tell ya. Because the guy you were looking for was too close. Right across the desk from ya.” Keyes’s beautiful, enigmatic rejoinder is: “Closer than that, Walter.” He means that the cop and the killer share more than they knew before the crime, that temptations that lead to murder are available to us all; the lure of transgression makes us closer than we think.

These two truths lead you not so much to different claims as to different worlds. Every decade or so, the Oswald-incriminating facts are comprehensively reviewed—most recently by Vincent Bugliosi, in a thousand-plus-page volume, Reclaiming History (2007)—and, every decade, people who don’t care tend to accept those facts, while the people who care most remain furious and unpersuaded. The world of the conspiracy buffs has a bibliography and a set of fixed points that run parallel to but separate from reality as it is usually conceived. The buffs, for instance, rely heavily on the memoir of Madeleine Brown, who claims to have been one of L.B.J.’s mistresses, and to have been told by him, the night before the murder, “Those goddam Kennedys will never embarrass me again!” The buffs debate whether she is wholly, largely, or only sporadically reliable. In the latest volume of Robert Caro’s L.B.J. biography, by contrast, Brown is not thought worth mentioning, even to disprove. (In any case, the key conspiracy scene she paints, a kind of pre-assassination party at the millionaire Clint Murchison’s Dallas house, attended by Johnson, J. Edgar Hoover, and Richard Nixon, has been conclusively debunked. No record of it exists in any Dallas newspaper, and Johnson can be safely placed in Houston that night.) In the same way, the buffs take for granted the role of Joseph Kennedy, first as a bootlegger, then as a campaign fund-raiser for his son entangled with the Mafia, and argue about whether the Mafia alone was the killer or the Mafia in league with the C.I.A. Joe Kennedy’s guilty past is the entire pivot of the assassination in a new conspiracy book, ominously titled The Poison Patriarch (2013), by Mark Shaw (Skyhorse); and the same idea is dramatized in the screenwriter William Mastrosimone’s Broadway-bound play “Ride the Tiger.” Yet David Nasaw’s recent, far-from-admiring biography of old Joe (2013) dismisses as complete legend the notion that he ever made a penny as a bootlegger or worked closely with the Mob. (He made his money in Hollywood and on Wall Street, mobs of their own.)

Bugliosi handles the conspiracy theorists with a relentless note of sarcastic condescension. But there are ways in which the pattern-seeking is a meaningful index of the event, and gives us more insight into its hold fifty years on than the evidence does. A web without a spider still catches the light. There are distinct period styles in paranoia. The first generation of assassination obsessives—Josiah Thompson, still writing; Harold Weisberg, long dead—were essentially hopeful proceduralists, men and women with thick files and endless clippings, convinced that due scrutiny of the record would reveal sufficient inconsistencies, opacities, and falsehoods to compel the reopening of the entire case. Their model was journalists of the I. F. Stone kind, the isolated man of integrity who could find the truth by scrutinizing the record.

The second kind of assassination obsessive emerged only later, in the mid-seventies. Where the proceduralists believe that the truth is in there, buried in some forgotten file folder, the fantasists believe, “X Files” style, that the truth is out there—available to those bold enough to imagine on the right scale of American extravagance. An exemplar here was David Lifton’s book Best Evidence, published in 1981, but his theories percolated at lectures and conferences throughout the seventies. He put forward an obviously mad idea with admirable logic: that the President’s body was secreted away between the killing and the autopsy, and his wounds altered.

The paradox is that, just as Thomas Pynchon or Don DeLillo dramatizes paranoia with a texture of specificity, the paranoid types are, in their own way, often much more empirically minded—willing to follow the evidence where it leads, even if that is right through the looking-glass—than their more cautious confrères. It is, in other words, possible to construct an intricate scenario that is both cautiously inferential, richly detailed, on its own terms complete, and yet utterly delusional. The J.F.K. conspiracy theorists are the first and hardiest of those movements—the truthers and birthers and moon walkers being their stepchildren—in which the old American paranoid style, once largely marginal and murmuring, married pseudoscience and became articulate, academic, systematized, and loud.

No matter how improbable it may seem that all the hard evidence could have been planted, faked, or coerced—and that hundreds of the distinct acts of concealment and coercion necessary would have been left unconfessed for more than half a century—it does not affect the production of assassination literature, which depends not on confronting the evidence but on discovering new patterns of connection and coincidence. The buffs’ books—Lamar Waldron and Thom Hartmann’s Legacy of Secrecy (2008), in development as a major Hollywood film, is a perfect instance—lay out ever more intricate and multiple patterns of apparent intention and reaction among Mafia dons and C.I.A. agents, all pointing toward Dealey Plaza. “Had ties with . . .” is the favored phrase, used to connect with sinister overtones any two personalities within the web. Waldron and Hartmann dismiss even Oswald’s murder of the Dallas police officer J. D. Tippit, forty-five minutes after J.F.K.’s assassination, despite the many witnesses who saw him shoot Tippit, or identified him as the man with the gun running from the scene.

Arguments like this tend to lead toward the same cul-de-sac, where the skeptic insists on being shown the spider and the buffs insist that it is enough to point to the web. One argument can stand for a hundred like it: a key early piece of evidence for conspiracy is that many of the doctors in the emergency ward at Parkland Memorial Hospital, where the President was brought from the fatal motorcade, said that they saw a large wound to the back of J.F.K.’s head, instead of the right front side, where the later autopsy and X-rays locate it. This is not really hard to explain. The wound was enormous, and the doctors never examined it, or turned J.F.K. over to verify that there was a rear head wound. The Zapruder film of the assassination shows, unmistakably, that the horrible wound was indeed to the right front side of his skull, while the back remained intact (aside from the small, almost invisible entrance wound).

So for the claim of a “rear head wound” to be accurate, it would be necessary for the Zapruder film to have somehow been altered and turned into a cunning animated cartoon. That is exactly what the “second generation” of theorists insist—that the Zapruder film itself is a fabrication, produced, in the words of one buff, “in a sophisticated C.I.A. photo lab at the Kodak main industrial plant in Rochester, New York.” Nor is this idea simply asserted. It is patiently argued, step by step, with the name of the optical printer detailed, even though Kodak’s own expert on 8-mm. film, Roland Zavada, has dismissed the idea of introducing complex optical-printer effects onto 8-mm. film in 1963, and declared that “there is no detectable evidence of manipulation or image alteration on the Zapruder in-camera original and all supporting evidence precludes any forgery thereto.” A theory that has the Zapruder film altered is absurd—but a theory that doesn’t have the Zapruder film altered has to accept that Kennedy had no rear-exit head wound, and therefore must have been shot from above and behind.

This constant cycle of sense and speculation is not about to end. Josiah Thompson, one of the most rational of the skeptics, wrote once that “you pull any single thread, any single fact, and you’re soon besieged with a tangle of subsidiary questions.” And this is true: any fact asserted can be met with a counter-fact—some of them plausible, many disputed, most creating contradictions that are unresolvable. But this is not a fact about conspiracies. It is a fact about facts. All facts in all inquiries come at us with their own shakiness, their own shimmer of uncertainty. The threads of evidence usually seem separate and sure only because life mostly comes at us in finished fabrics, and nothing requires us to pull the thread. When we do, whenever we do, there’s a tangle waiting.

Bugliosi makes this point in a practical, prosecutor’s spirit, saying that, once you are sure of the conclusion, you have to live with the evidentiary inconsistencies: you may not know the answer to a question, but that does not mean that the question is unanswerable. To take one of many that arise in the assassination case: much used to be made of the mysterious “three tramps” who were arrested shortly after the shots. They turned out to be, after long years of speculation . . . three tramps, with knowable names and mundane histories. It is a safe, though not a certain, bet that the remaining mysteries will resolve just as mundanely. In the meantime, though, every fact in the case, no matter how solid-seeming, can be countered by some other fact, however speculative. Facts provoke new patterns even as they disprove old ones.

Yet the foundational sense that there were bizarre forces at work in the period, paranoid and violent and tightly interlocked in the strangest imaginable ways, and by their nature resistant to the common-sense impulses of ordinary explanation—this is, as far as one can tell, true. As J.F.K. himself is claimed to have said, apropos of the then popular coup-d’état thriller “Seven Days in May,” such a coup in the United States was far from being unthinkable: “It’s possible. It could happen in this country. But the conditions would have to be right. If, for example, the country had a young president and he had a Bay of Pigs, there would be a certain uneasiness. . . . Then if there were another Bay of Pigs, the reaction of the country would be, ‘Is he too young and inexperienced?’ The military would almost feel that it was their patriotic obligation to stand ready to preserve the integrity of the nation, and only God knows just what segment of democracy they would be defending if they overthrew the elected establishment.” (He added that he intended it not to happen “on his watch.”)

By J.F.K.’s own accounting, the Bay of Pigs was the first failure. In the eyes of the national-security hawks, the Cuban missile crisis, though presented to the public as a showdown that Kennedy won, was the second, an exercise in abject appeasement. Kennedy had refused the unanimous advice of his generals and admirals to bomb Cuba, and had settled the crisis by giving the Russians what they wanted, the removal of missiles from Turkey. (This was kept quiet, but the people who knew knew.) The notion that the Cold War national-security state, which Eisenhower warned against, might have decided to kill the President is not as difficult to credit as one wishes. There were C.I.A. operatives prepared to kill foreign leaders, some of them previously friendly, for acts they didn’t like, and to recruit gangsters to do it, and generals who were eager to invade Cuba even at the risk of nuclear war, and who resented Kennedy for restraining them. (A veteran journalist, Jefferson Morley, has been pursuing the trail of a now dead C.I.A. agent named George Joannides through a Freedom of Information Act lawsuit, believing that, at a minimum, the C.I.A. was keeping a much sharper eye on Oswald than it ever wanted known. Relevant documents are supposed to be released in 2017.)

Oddly, there’s confirmation of this in the work of the Kennedy brothers’ house historian, Arthur Schlesinger, Jr. An establishment figure devoted to maintaining the image of the Kennedys, and no friend to the conspiracy theorists, Schlesinger made plain that the Kennedys really did believe themselves to be subject to a hostile alliance of the military and the C.I.A., largely outside their direct control. “Intelligence operatives, in the CIA as well as the FBI, had begun to see themselves as the appointed guardians of the Republic, infinitely more devoted than transient elected officials, morally authorized to do on their own whatever the nation’s security demanded,” Schlesinger concludes. Ted Sorensen, another Kennedy intimate, wrote in his memoir that when Jimmy Carter nominated him, in 1977, to be the director of central intelligence, agency officials worked furiously (and successfully) to get the nomination withdrawn, quite possibly because there was evidence about J.F.K.’s death that they didn’t want him to see. Vincent Bugliosi’s confidence that these things don’t happen here isn’t shared by those closest to the case.

An assassination should be significant for more than its atmospherics. Kennedy’s should also matter for people who weren’t there, because something happened in America that would not have happened had Kennedy lived. The conventional claim is that optimistic liberalism died in Dallas. Ira Stoll, in his new book, JFK, Conservative (2013), makes this claim in reverse: he believes that the path of true conservatism would have gone more smoothly if Kennedy had not been killed. Stoll sincerely believes that Kennedy’s spiritual heir was Reagan, while shifty Nixon was the real liberal, whose heir is—who else?—shifty Obama.

Of course, every American President is in some sense a conservative—there are no Léon Blums or Salvador Allendes in our record. But Kennedy was a classic Cold War liberal: someone who believed in confronting the Communists (nonviolently, if at all possible) and creating a network of social welfare to relieve social anxiety. The real conservatives of the time, the John Birch Society and the Goldwater wing of the Republican Party, believed in confronting Communism violently, and in abjuring any federal programs of civil rights and social welfare, since these were certainly left-wing and possibly Communist. (Ronald Reagan, after all, came to notice for crusading against Medicare, the way his successors crusade against Obamacare.) Unable to explain why the actual right-wingers hated J.F.K. as much as they did, Stoll insists that a conspiracy of leftish doves who surrounded J.F.K.—Sorensen and Schlesinger, in particular—warped his words and purposes retrospectively: a conspiracy theory every bit as loony as any from the buffs.

At the other end of the spectrum, Thurston Clarke, in his new book, J.F.K.’s Last Hundred Days (2013), argues passionately that J.F.K. was moving ever more decisively left, flapping his wings like a dove, just before he was killed. The evidence is that Kennedy began to argue, more loudly than he had before, that American politicians should do everything possible to avoid provoking a nuclear holocaust that would destroy civilization. One would think this a minimal ground of sanity, rather than a radical departure from orthodoxy—but, as Clarke reminds us, driving to the very edge of universal destruction was widely seen as an opportunity to outsmart the Soviets. Conversations about how many million casualties the United States could endure were not just material for “Dr. Strangelove.” More specifically, the line goes, Kennedy was planning to get out of Vietnam by the end of 1965, or at least had made up his mind not to get drawn any farther in. Accounts of private conversations and notes from National Security Council meetings are played as cards in this game. Jeff Greenfield, in his new counterfactual book If Kennedy Lived (2013), asserts, along with many other larksome predictions (the Beatles would have gone to the White House; Ronald Reagan would have got the Republican nomination in 1968), that J.F.K. would never have escalated the war in Vietnam.

It is hard to take these claims as much more than wishful thinking projected retrospectively onto a pragmatic politician, whose commitment to Cold War verities, while less nihilistic than that of some others, was still complete. It’s true that Kennedy was not inclined, as his two immediate successors were, to see foreign affairs as a series of challenges to his manhood; a true war hero, he truly hated war. But though the compulsions of personality are strong, the logic of American politics is stronger. Kennedy might well have felt little of the insecurity that troubled Johnson’s soul as he escalated the war. But exactly the same political circumstances would have confronted him. Had the North Vietnamese Army been allowed to march into Saigon in 1965 instead of in 1975, the Goldwater Republicans would not have said, “Thank God for Kennedy’s wisdom in not wasting tens of thousands of American lives and millions of Vietnamese ones in an effort to stop what was sure to happen in any case!” They would have said, “Another country, another region, fecklessly lost to Communism, and on your watch!” The truth, that the fate of Vietnam, of crucial importance to the Vietnamese, was of little consequence to America, or to its struggle with the Soviet Union, was simply a taboo statement on every side.

Paranoid as the period was, it was in ways more open. Oswald’s captors decided that he would have to be shown to the press, and arranged a midnight press conference for him—not something that would happen today—while a lawyer for the Warren Commission met at length with a Communist pushing a conspiracy theory. (One doubts that a 9/11 commissioner ever felt obliged to meet with a truther.) The national-security state might have been in place, but the national-surveillance state wasn’t, quite.

Oswald was a kind of wooden pawn of the Cold War era who seemed always on the verge of being sacrificed. As a teen-ager, he educated himself as a Marxist, and he remained a fantasist who feasted on James Bond novels—just like the President!—and subscribed to both mainline Communist and Trotskyite papers, without ever really grasping the difference between them. When he decided to flee, as a teen-age marine, to what he imagined to be the socialist paradise of Russia, the K.G.B. seemed so bewildered that it sent him off to work in a factory in Minsk, and watched him as unhappily as the American security services did later.

Once again, the problem is not an absence of intelligence; the problem is having too much intelligence to add up intelligently. Another thousand Oswalds, long since lost to time, were under scrutiny, too. To take a specific instance: the man whom Oswald sat next to on the bus to Mexico City turned out to be, certainly unknown to him, a con man and onetime fanatical Hitler supporter named Albert Osborne. Osborne earned an appendix in the Warren report; he appears briefly and then vanishes into history again. Had he shot someone, we would ask what he was doing there, and why no one knew more about him than about the odd, long-forgotten defector Oswald. Oswald’s life reminds us that modernity in America, with its rootless wanderings and instant connections, permanent dislocations and endless reinventions, is a kind of coincidence machine, generating two or three degrees of separation between the unlikeliest of fellows.

What is true of Oswald is true as well of his own assassin, that lesser mystery figure Jack Ruby. Ruby is cast in the buff literature as a sinister Mafia hit man, there to silence Oswald before he could speak. (The killing of Hyman Roth, in “The Godfather Part II,” seems modelled on Ruby’s act.) Jack Ruby did seek out Mafia-connected characters in the months before the assassination—but he seems to have been trying to get help to put pressure on the American Guild of Variety Artists to enforce its rules about using unpaid strippers. (He considered his rivals’ amateur striptease shows to be unfair competition to the polished pro acts at his own joint.)

Again and again, the investigation discloses bizarre figures and coincidences within a web of incident that seem significant in themselves. The case of Judith Campbell Exner is famous. She really was J.F.K.’s mistress, and a Sinatra girlfriend, and the mistress of the Chicago Mob boss Sam Giancana, all within a few years. Even if she wasn’t actually a go-between from one to the other, that would not alter the reality that she had slept with all three, and so lived in worlds that, in 1963, no one would have quite believed could penetrate each other so easily. Still more startling is the case of the painter Mary Pinchot Meyer, who was also unquestionably one of Kennedy’s mistresses. She was the ex-wife of a high-ranking C.I.A. officer (who himself had once had pacifist leanings), an intimate of Timothy Leary, at Harvard, and an LSD user. She was murdered, in 1964, on the towpath in D.C., in murky circumstances. Even if none of this points toward a larger occult truth—even if her death was just a mugging gone wrong—the existence of such a figure says something about the weave of American experience. Worlds that seemed far apart at the time are now shown to have been close together, unified by men and women of multiple identities, subject to electric coincidences—no one more multiple than J.F.K. himself, the prudent political pragmatist who was also the reckless erotic adventurer, in bed with molls and Marilyns, and maybe even East German spies.

The passion of J.F.K. may lie in the overlay of all those strands and circles. The pattern—weaving and unweaving in front of our eyes, placing unlikely people in near proximity and then removing them again—is its own point. Mailer was right when he claimed that the official life of the country and the real life had come apart, but who could have seen that it would take a single violent act, rather than “existential” accomplishment, to reveal how close they really were? Oswald acted alone, but the hidden country acted through Oswald. This is the perpetual film-noir moral lesson: that the American hierarchy is far more unstable than it seems, and that the small-time crook in his garret and the big-time social leader in his mansion are intimately linked. When Kennedy died, and the mystery of his murder began, we took for granted that the patrician in tails with the perfect family and the sordid Oswald belonged to different worlds, just as Ruby’s Carousel Club and the White House seemed light-years apart. When Kennedy was shot, the dignified hierarchy seemed plausible. Afterward, it no longer did. What turned inside out, after his death, was that reality: the inner surface and the outer show, like a magician’s bag, were revealed to be interchangeable. That’s why the death of J.F.K., even as it fades into history, remains so close, close as can be, and closer than that. Ω

[In 1986, Adam Gopnik began his long professional association with The New Yorker with a piece that would show his future range, a consideration of connections among baseball, childhood, and Renaissance art. He has written for four editors at the magazine: William Shawn, Robert Gottlieb, Tina Brown, and David Remnick. Gopnik, born in Philadelphia, lived his early life in Montreal and received a BA from McGill University. Later, he studied at the New York University Institute of Fine Arts. In 2011, Gopnik was chosen as the speaker for the 50th anniversary of the Canadian Massey Lectures, delivering five lectures across five Canadian cities that make up his book Winter: Five Windows on the Season (2011). More recently, Gopnik has written The Table Comes First: Family, France, and the Meaning of Food (2012).]

Copyright © 2013 Condé Nast Digital



Creative Commons License
Sapper's (Fair & Balanced) Rants & Raves by Neil Sapper is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Based on a work at sapper.blogspot.com. Permissions beyond the scope of this license may be available here.



Copyright © 2013 Sapper's (Fair & Balanced) Rants & Raves

Monday, October 28, 2013

Today — Ladies & Germs — This Blog Offers A Softporn 'Toon For Your Viewing Pleasure

This blog's resident 'toonist, Tom Tomorrow, provides the wildest guided tour of sexual innuendo this side of Amsterdam. In today's 'toon, the resident 'toonist skewers the usual cast of wackos: a Nurse in an Obamacare website error page, a pair of sexy NSA Spies, a human Drone, a sexy Moron (oxymoron?) in Patriot garb, and a sexy female Cab Driver making suggestive conversation with her male passenger (wearing only a tie and a thong, like A. Weiner). If this is a (fair & balanced) offer of a trick or a treat, so be it.

[x This Modern World]
Oversexed Halloween [Rated R]
By Tom Tomorrow (Dan Perkins)

Tom Tomorrow/Dan Perkins

[Dan Perkins is an editorial cartoonist better known by the pen name "Tom Tomorrow". His weekly comic strip, "This Modern World," which comments on current events from a strong liberal perspective, appears regularly in approximately 150 papers across the U.S., as well as on Daily Kos. The strip debuted in 1990 in SF Weekly. Perkins, a longtime resident of Brooklyn, New York, currently lives in Connecticut. He received the Robert F. Kennedy Award for Excellence in Journalism in both 1998 and 2002. When he is not working on projects related to his comic strip, Perkins writes a daily political weblog, also entitled "This Modern World," which he began in December 2001. Earlier this year, Perkins was named the winner of the 2013 Herblock Prize for editorial cartooning.]

Copyright © 2013 Tom Tomorrow (Dan Perkins)



Creative Commons License
Sapper's (Fair & Balanced) Rants & Raves by Neil Sapper is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Based on a work at sapper.blogspot.com. Permissions beyond the scope of this license may be available here.



Copyright © 2013 Sapper's (Fair & Balanced) Rants & Raves

Sunday, October 27, 2013

Q: Why Does This Blogger Live In Texas? A: Ya Gotta Dance With The One That Brung Ya

With a H/T (Hat Tip) to a young friend in the Valley of the Sun, today's post features an econ prof in suburban Washington, DC, who's written a positive essay about the Lone Star State. Not too long ago, an old college chum e-mailed this blogger and wondered why the blogger was still deep in the heart of Texas. In response, this blogger sent the old chum a link to Gary P. Nunn's "What I Like About Texas." However, the magic of webpage coding enables this blog to supply background music for today's post:

[x YouTube/SammyTexas Channel]
"What I Like About Texas" (1997)
By Gary P. Nunn and the Sons of the Bunkhouse Band
Slideshow by Jeremy/Jeremiah Oldham


You ask me what I like about Texas
I tell you it's the wide open spaces!
It's everything between the Sabine and the Rio Grande.
It's the Llano Estacado,
It's the Brazos and the Colorado;
Spirit of the people down here who share this land!
It's another burrito, it's a cold Lone Star in my hand
It's a quarter for the jukebox, boys,
Play the Sons of the mothers of the Bunkhouse Band!

You ask me what I like about Texas
It's the big timber round Nacogdoches
It's driving El Camino Real into San Antone
It's the Riverwalk and Mi Tierra
Jamm'n out with Bongo Joe
It's stories of the Menger Hotel and the Alamo!
(You remember the Alamo!)

It's another burrito, it's a cold Lone Star in my hand!
It's a quarter for the jukebox, boys,
Play the Sons of the mother love'n Bunkhouse Band!

It's another burrito, it's a cold Lone Star in my hand!
It's a quarter for the jukebox, boys,
Play the Sons of the mother love'n Bunkhouse Band!

Well, you ask me what I like about Texas
It's Bluebonnets and Indian Paint Brushes
Swimming in the sacred waters of Barton Springs
It's body surfing in the Frio
It's Saturday night in Del Rio!
It's crossing over the border for some cultural exchange!

It's another burrito, it's a cold Lone Star in my hand!
It's a quarter for the jukebox, boys,
Play the Sons of the mother love'n Bunkhouse Band!

Well, you ask me what I like about Texas
Well, I could tell you, but we'd be here all night long ♥

[x Wikipedia]

Gary P. Nunn is a Texas singer/songwriter. He was born in Brownfield, Texas, and was a member of Lubbock, Texas rock band The Sparkles during the 1960s. In 1995, Nunn was inducted into the West Texas Walk of Fame, and in 2004, into the Texas Hall of Fame.

In 1968 he was a pharmacy major at the University of Texas at Austin. By the 1970s, Nunn was backing Jerry Jeff Walker with the Lost Gonzo Band; the band parted ways with Walker in 1977, and Nunn later moved on to a solo career. "Austin City Limits" on PBS has made the songwriter's "London Homesick Blues" its theme for more than two decades. The refrain "Home with the Armadillo" may refer to the Armadillo World Headquarters, the hub of Austin's heyday as a countercultural center in the 1970s.]

Copyright © 1997 Gary P. Nunn

If this is a (fair & balanced) virtual two-step across Texas, so be it.

[x Time]
Why Texas Is Our Future
By Tyler Cowen

Tag Cloud of the following piece of writing

created at TagCrowd.com

They say the Lone Star State has four seasons: drought, flood, blizzard and twister. This summer 97% of the state was in a persistent drought; in 2011 the Dallas-Fort Worth area endured 40 straight days of temperatures of 100° or higher in July and August. The state's social services are thin. Welfare benefits are skimpy. Roughly a quarter of residents have no health insurance. Many of its schools are less than stellar. Property-crime rates are high, and rates of murder and other violent crimes are hardly sterling either. A recent FBI report found that the home state of Chuck Norris led the nation in the number of people punched or kicked to death in 2012.

So why are more Americans moving to Texas than to any other state?

Texas has acquired a certain cool factor recently. The pundit Marshall Wittmann has called it "America's America," the place where Americans go when they need a fresh start. The state's ethnic and cultural diversity has made places like Austin and Marfa into magnets for artists and other bohemians.

But I believe the real reason Americans are headed to Texas is much simpler. As an economist and a libertarian, I have become convinced that whether they know it or not, these migrants are being pushed (and pulled) by the major economic forces that are reshaping the American economy as a whole: the hollowing out of the middle class, the increased costs of living in the U.S.'s established population centers and the resulting search by many Americans for a radically cheaper way to live and do business.

One of these pioneers is Casey Colando. When he was just 19, he bought—sight unseen—five acres of Big Bend mountain desert country in Texas as an investment. It was just $300 an acre, far away both culturally and geographically from his native upstate New York. Four years later, in 2008, Colando moved to his homestead in the magnificent but remote region of West Texas.

A graduate of the State University of New York at Canton, where he studied alternative energy, Colando now lives with his wife Sara some 80 miles from the nearest town (Alpine, pop. 6,000). The couple bought more land adjoining their original property, and they run an alternative-energy business that serves various settlers who have moved to this isolated corner of Texas—helping their neighbors eschew what Colando calls "the big electric company" and live off the grid by installing solar and wind power.

Colando says he first tried to launch his alternative-energy business in upstate New York. "It was difficult work for a small business there," he says. "The costs were higher, and there were fewer business opportunities, more regulations. So I came out West, and I haven't looked back."

To a lot of Americans, Texas feels like the future. And I would argue that more than any other state, Texas looks like the future as well—offering us a glimpse of what's to come for the country at large in the decades ahead. The U.S. is experiencing ever greater economic inequality and the thinning of its middle class; Texas is already one of our most unequal states. America's safety net is fraying under the weight of ballooning Social Security and Medicare costs; Texas' safety net was built frayed. Americans are seeking a cheaper cost of living and a less regulated climate in which to do business; Texas has those in spades. And did we mention there's no state income tax? (Texas is one of only seven states in the union that lack the levy.)

There's a bumper sticker sometimes seen around the state that proclaims, "I wasn't born in Texas, but I got here as fast as I could." As the U.S. heads toward Texas, literally and metaphorically, it's worth understanding why we're headed there—both to see the pitfalls ahead and to catch a glimpse of the opportunities that await us if we make the journey in an intelligent fashion.

AVERAGE IS OVER

The first thing to understand about our more Texan future is what's happening to the American workforce on the whole: average is over.

More and more workers are leaving the middle class—headed both up and down—and fewer workers are moving into it. Median household income has fallen about 5% since the Great Recession ended in 2009; in that same period, 58% of job growth was in lower-wage occupations, defined as those paying $13.83 an hour or less.

However, it's not that incomes are stagnant generally. Earners at the top have done very well—but the gains have been distributed quite unevenly. Last year the top 1% of earners took home 19.3% of household income, their largest share since 1928. The top 10% of earners didn't do so badly either, taking home a record 48.2% of household income.

We know the forces driving this: globalization, advances in computing, and automation mean that Americans are facing tougher competition than ever before from workers overseas, machines and smart software. The individuals moving up the economic ladder are the ones who've responded to this competition by upgrading their skills and efforts. The ones moving down are largely those who have failed or been unable to respond at all.

The group struggling the most is the young. People with four-year college degrees earn less today than graduates did in 2000, and over time this will translate into persistently lower earnings. And too many young people today, even if they have jobs, have failed to establish themselves on career ladders. Among Americans ages 16 to 24 who are not enrolled in school, only 36% are working full time, 10% less than in 2007. A 24-year-old who is working part time for a website, as a Pilates instructor or in retail may be having fun, but he or she probably won't be receiving strong promotions a couple of decades down the line.

Meanwhile, the cost of hanging on to a middle-class lifestyle is increasing. As a 2010 report by the Department of Commerce found, looking at economic data from the past two decades, "The prices for three large components of middle-class expenses have increased faster than income: the cost of college, the cost of health care and the cost of a house."

Texas isn't immune to any of this, of course. But it just may be the friendliest state for those who worry about their prospects in this new normal. For starters, the job scene is markedly better (more on that in a moment). And more crucially, it's cheaper to live in Texas and cheaper to thrive there too. Don't underestimate the power of that lower cost of living, for it can be the difference between a trailer and an apartment—between an apartment and a home.

"... AND I WOULD GO TO TEXAS"

As Davy [David in Texas] Crockett said in 1835, as his political fortunes ran out in Tennessee, "They might all go to hell, and I would go to Texas." The phrase Gone to Texas (sometimes abbreviated GTT) was the expression once used by Americans fleeing to the Lone Star State to escape debt or the law—posted as a sign on a fence or scratched into the door of an abandoned home.

While today's migrants aren't the vagabonds and outlaws of the 19th century, people are still "gone to Texas." Texas is America's fastest-growing large state, with three of the top five fastest-growing cities in the country, according to Forbes: Austin, Dallas and Houston. In 2012 alone, total migration to Texas from the other 49 states in the union was 106,000, according to the U.S. Census Bureau. Since 2000, 1 million more people have moved to Texas from other states than have left.

To get a sense of who these migrants are, consider Tara Connolly. In 2005 the New York City native was sharing a 500-sq.-ft. apartment with her then boyfriend in Cobble Hill, Brooklyn—a gentrified neighborhood where studio apartments rent for about $2,000 a month and sell for about half a million dollars. Feeling stressed, restless and in need of a change, she read an article about Austin and decided to pack up and move, with little more than the hope of finding a job in her field, graphic design.

Eight years later, Connolly is in her mid-30s and works at a hip marketing company in Austin, and she's the owner of a vintage midcentury home twice the size of her old New York City apartment. It comes with a mortgage payment half the size of her big-city rent. "Buying a house was not something I was thinking about when I came to Austin," Connolly says. "But here you have people in their 20s buying houses."

When Connolly announced that she was moving to Austin, she was met with looks of alarm from her Bronx-born family. But she says that after visiting her and seeing her new home, her family has changed its tune. "They say they can't believe how green it is," she says. "They thought it was all tumbleweed."

Connolly's story is hardly unique. And the general pattern is by no means a new one, according to Bernard Weinstein, an economist and associate director of Southern Methodist University's Maguire Energy Institute. Weinstein has been observing the Texas economy for more than 30 years and says that "whenever the economy is bad in the rest of the country, that pushes people to the Sun Belt." Along with the affordable housing and a warm climate, newcomers are drawn by the notion that in the case of Texas, jobs are plentiful. Texas' unemployment rate is currently 6.4%—high for Texas but below the national rate of 7.3%.

And as Connolly's story shows, these pilgrims aren't coming just from places like Michigan, where a major industry has collapsed, but also from more prosperous states like New York and California. Over the past 20 years, more than 4 million Californians have moved out of California, according to Weinstein. "That's two cities the size of Houston," he notes.

Jed Kolko, chief economist for San Francisco-based real estate website Trulia, says that from 2005 to 2011, 183 Californians moved to Texas for every 100 Texans who moved to California. "Home prices, more than any other factor, cause people to leave," Kolko says.

Why is California, for instance, so expensive and Texas so cheap? "God wanted California to be expensive," Kolko says, with its ideal climate and attractive but limited real estate squeezed between the mountains and the ocean. The demand for a piece of the California dream was destined to be expensive, and lawmakers passed strict building codes to add to the bottom line.

Texans might argue that they have some beautiful real estate too, but in the wide-open spaces surrounding the state's major urban areas, there is no ocean to constrict growth and far fewer stringent rules. Many unincorporated areas beyond the booming urban centers, where Texas has land to spare, have no zoning laws at all.

The lower house prices, along with a generally low cost of living—helped along by cheap labor, cheap produce and cheap gas (currently about $3 a gallon)—really matter when it comes to quality of life. For instance, the federal government calculated the Texas poverty rate as 18.4% for 2010 and that of California as about 16%. That may sound bad for Texas, but once adjustments are made for the different costs of living across the two states, as the federal government does in its Supplemental Poverty Measure, Texas' poverty rate drops to 16.5% and California's spikes to a dismal 22.4%. Not surprisingly, it is the lower-income residents who are most likely to leave California.
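
The mechanics of that flip are simple enough to sketch. What follows is a minimal Python toy with hypothetical incomes and invented cost indexes; it is not the Census Bureau's actual Supplemental Poverty Measure methodology, which also accounts for taxes, benefits and housing costs. It only illustrates the mechanism: scale the poverty line by local prices, and a two-state comparison can reverse.

def poverty_rate(incomes, threshold, cost_index=1.0):
    """Share of households below a poverty line scaled by local costs.

    cost_index > 1.0 means the state is pricier than the national
    average, which raises the effective threshold there.
    """
    adjusted = threshold * cost_index
    return sum(1 for inc in incomes if inc < adjusted) / len(incomes)

# Hypothetical household incomes, in thousands of dollars:
texas = [14, 18, 22, 30, 45, 60, 80, 120]
california = [16, 20, 25, 34, 50, 70, 95, 140]
LINE = 23  # stand-in national poverty line, also in thousands

# Unadjusted, California looks better off than Texas...
print(poverty_rate(texas, LINE))             # 0.375
print(poverty_rate(california, LINE))        # 0.25

# ...but scale the line by invented cost indexes (Texas cheap,
# California dear) and the comparison flips, as the SPM figures do:
print(poverty_rate(texas, LINE, 0.92))       # 0.25
print(poverty_rate(california, LINE, 1.25))  # 0.375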

On the flip side, Texas has a higher per capita income than California, adjusted for cost of living, and nearly catches up with New York by the same measure. Once you factor in state and local taxes, Texas pulls ahead of New York—by a wide margin. The website MoneyRates ranks states on the basis of average income, adjusting for tax rates and cost of living; once those factors are accounted for, Texas has the third highest average income (after Virginia and Washington State), while New York ranks 36th.

THE TEXAS MODEL

Of course, it's not just cheap living that draws people to Texas. It's also jobs. In the past 12 months, Texas has added 274,700 new jobs—that's 12% of all jobs added nationwide and 51,000 more than California added. In a Moody's Analytics study, seven of the top 10 cities for projected job growth through 2015 are in Texas, and four Texas cities top the list: Austin, McAllen (in the Rio Grande Valley), Houston and Fort Worth. "For the past 22 years, Texas has outgrown the country by a factor of more than 2 to 1," Dallas Federal Reserve president Richard Fisher tells TIME, echoing an April speech in which he laid out the story of Texas growth at some length.

"My uninformed friends usually say, 'But Texas creates low-paying jobs.' To that I respond, You are right. We create more low-paying jobs in Texas than anybody else," Fisher says. "But we also created far more high-paying jobs." In fact, from 2002 to 2011, with 8% of the U.S. population, Texas created nearly one-third of the country's highest-paying jobs.

"Most importantly," Fisher says, "while the United States has seen job destruction in the two middle-income quartiles, Texas has created jobs for those vital middle-income workers too." From 2001 to 2012, the number of lower-middle-income jobs in Texas grew by 14.4%, and the number of upper-middle-income jobs grew by 24.2%. If you look at the U.S. without Texas over the same period, the number of lower-middle jobs grew by an anemic 0.1%, and the number of upper-middle jobs shrank by 6%.

"The bottom line," says Fisher, is that "we have experienced growth across all sectors and in all income categories ... If you pull Texas out of the puzzle of the United States, the rest of the country falls down!"

How did Texas do it?

Texas Monthly senior editor Erica Grieder credits the "Texas model" in her recent book, Big, Hot, Cheap, and Right: What America Can Learn From the Strange Genius of Texas (2013). "The Texas model basically calls for low taxes and low services," she says. "In a sense, it's just a limited-government approach." Chief Executive magazine has named Texas the most growth-friendly state in the nation for nine years in a row. The ranking is based on survey results from its CEO readership, who grade the states on the basis of factors such as taxes and regulation, the quality of the workforce and the living environment. Cheap land, cheap labor and low taxes have all clearly contributed to this business-friendly climate. But that's not the whole story.

"Certainly since 2008, the beginning of the Great Recession, it's been the energy boom," SMU's Weinstein says, pointing to the resource boom's ripple effect throughout the Texas economy. However, he says, the job growth predates the energy boom by a significant margin. "A decade ago, before the shale boom, economic growth in Texas was based on IT development," Weinstein says. "Today most of the job creation, in total numbers, is in business and personal services, from people working in hospitals to lawyers."

Of course, not everyone's a fan of the Texas model. "We are not strong economically because we have low taxes and lax regulation. We are strong economically because of geography and geology," says Scott McCown, a former executive director of the Center for Public Policy Priorities who is now a law professor at the University of Texas. "We've built an economy favoring the wealthy.... If that's the ultimate end result of the Texas model in a democratic society, it will be rejected."

So will the rest of the country follow Texas' lead? People are already voting with their feet. The places in the U.S. seeing significant in-migration are largely in relatively inexpensive parts of the Sun Belt. These are, by and large, affordable states with decent records of job creation—often with subpar public services and low taxes. Texas is just the most striking example. But Oklahoma, Colorado, the Carolinas and other parts of the South are benefiting from the same trends—namely that California, New York and the other high-tax, high-cost states are no longer such good deals for much of the U.S.'s middle and lower-middle classes.

The Americans heading to Texas and other cheap-living states are a bit like the mythical cowboys of our past—self-reliant, for better or worse.

THE NEW COWBOYS

For Americans heading to these places, the likelihood is that they'll be facing slow-growing, stagnant or even falling wages. Yet it won't be the dystopia that it may sound like at first. Automation and globalization don't just make a lot of goods and services much cheaper—they sometimes make them free. There is already plenty of free online education, graded by computer bots, and free music on YouTube. Hulu and related online viewing services are allowing Americans to free up some money by cutting the cable cord. Facebook soaks up a lot of our free time, and it doesn't cost a dime. The near future likely will bring free or very cheap online medical diagnosis.

This suggests that wages and GDP statistics may no longer be the most accurate gauges of real living standards. A new class of Americans will become far more numerous. They will despair of finding good middle-class jobs and decide to live off salaries that are roughly comparable to today's lower-middle-class incomes. Some will give up trying so hard—but it won't matter as much as it used to, because they won't have to be big successes to live relatively well.

"The world of work is changing, and what we are learning is it's no longer about the 9-to-5, it's about the work itself," says Gary Swart, CEO of oDesk, a global job marketplace that sells tools to allow businesses to hire and manage remote workers. "Millennials, they are about how to make an impact.... They want freedom in their lives, and they care more about that than they do the financial rewards."

For an example of one of these "new cowboys," take Joe Swec. For most of his life, Swec, 32, has lived in beautiful (and, he notes, expensive) places. Born in the San Francisco Bay Area, he graduated from California Polytechnic State University with a degree in structural engineering and went to work in Healdsburg on the construction and restoration of several Sonoma County wineries. Then he headed south to work in Malibu. But he was not content.

"I wanted a career change," he says. "I wanted to do something more creative, and I would fantasize about being an artist."

So five years ago, Swec moved to Austin. "My friends thought I was crazy—why would I move to Texas?" he says. "They also wondered why I would leave a six-figure job. I saw it differently. I wanted my job to give me a happy life."

After moving, Swec first worked as a bartender, then as a waiter. Then he got a job doing silk screens for a design company. Inspiration came along when he came across papers his grandfather had collected—scrapbooks filled with calligraphy and hand lettering. He found he had an affinity for the art of lettering, and as he worked on an outdoor mural, he wondered why he didn't do this for a living. So he took up a career as a sign painter.

His hand-lettered signs now appear on the walls and doorways of some of Austin's newest, liveliest restaurants and pubs. "My friends out in California don't understand why I like it here," Swec says. "But I have just developed a fondness for the local way of life."

In the coming decades, some people may even go to extremes in low-cost living, like making their home in micro-houses (of, say, about 400 sq. ft. and costing $20,000 to $40,000) or going off the grid entirely. Brad Kittel, owner of Tiny Texas Houses, blogs about his small homes built from salvaged materials at tinytexashouses.com. His business, based in the small rural community of Luling, east of San Antonio, offers custom homes, plans, and lessons on how to be a salvage miner. So far he has built about 75 tiny homes, and he has plans for a tiny-home community built around a sort of central lodge house. Kittel, 57, is a former Austin developer who pioneered the gentrification of a crumbling East Austin neighborhood in the late 1980s. These days most of his buyers are baby boomers. "Downsizing was just a whisper. Now it's turning into a mantra," Kittel says. "My generation, we were accumulators—big houses, big cars. But now we have no big resources."

The micro-home trend is being watched by traditional homebuilders as well. Texas-based developer D.R. Horton, listed on the New York Stock Exchange and one of the largest homebuilders in the country, built 29 micro-homes sized from 364 to 687 sq. ft. in Portland, OR, last year, priced from $120,000 to $180,000—admittedly far from the company's headquarters in spacious Fort Worth.

In some ways, the new settlements of a Texas-like America could come to resemble trailer parks—culturally rich trailer parks, so to speak. The next Brooklyn may end up somewhere in the Dakotas. Fargo, anyone?

Nonetheless, America, a historically flexible nation in cultural and economic terms, will adjust. One of our saving graces may end up being just how wasteful we've been in the past. It will be possible for many consumers to cut back significantly on spending without losing too much in terms of material well-being and happiness.

The new frugality born of the Great Recession is unlikely to give way to the old conspicuous consumption anytime soon, if consumer studies are to be believed. Nick Hodson, a partner and member of the consumer and retail practice at Booz & Co., points to his company's 2012 study of 2,000 grocery shoppers across the country. The study found that "value-seeking behavior" was here to stay.

"The recession caused about 20 to 30% more shoppers to adopt these behaviors as they adjusted to straitened personal circumstances or simply followed a set of perceived 'acceptable' frugal behaviors," the study concluded. "Today, 75 to 90% of consumers are exhibiting these frugal shopping behaviors. What's important is that a majority—perhaps two-thirds—of the newly frugal shoppers report that they will not revert to their previous behaviors as the recession ends."

THE TRAIL AHEAD

There are, of course, major downsides to the future I'm describing here. A lot of health care will become more expensive and harder to access. Many Americans will have to downsize their living quarters involuntarily. People in the shrinking middle class who want to have more than one child may find the costs too high. There is no longer the expectation, much less the guarantee, that living standards double or even increase much with each generation.

But it's not all bad news—especially if we take the right steps to prepare. The flood of Americans moving to Texas shows us where we need to focus our attention; what these migrants have found in Texas shows us ways many of our cities and states can improve.

Most critically, across the country, our K-12 education system needs to be much more rigorous, so that more Americans will be prepared to succeed in the new high-tech era to come. Right now, labor markets and jobs are changing faster than schools, and that means graduates are being left behind. Education at all levels needs to be cheaper and easier to access—and family support for students needs to be much stronger as well.

There are also many small but important ways in which states and cities can adjust in order to incorporate some of the lessons Texas has to teach.

For instance, states could deregulate building so that rents and home prices could be much lower. Housing is one of the biggest costs in most people's budgets, and it will be difficult to bring those costs down without greater competition and significantly higher urban density. In other words: San Francisco needs to become more like Houston when it comes to zoning.

Likewise, it would be a tremendous boon for low-skilled workers if we scaled back much of the occupational licensing that exists at the state and local levels. There's no reason a worker should need legal permission to become, say, a barber or a cosmetologist, as is currently the case in many states. Is there any good reason that Nevada, Louisiana, Florida and the District of Columbia should require interior designers to take 2,190 hours of training and pass an exam before having the legal right to practice? By relaxing these and many other requirements, we could create a lot more decent jobs and lower prices for consumers at the same time.

A little more freedom in strategically targeted areas—that is, a little more Texas—could go a long way.

Don't be scared. As Tara Connolly found, Texas is a welcoming place: "Everyone is just so friendly, and they look you in the eye." And she wouldn't even think of going back to New York City. "The constant stress doesn't seem appealing," she says. "The cost was insane, and it was time to start fresh. This was a good place to try." Ω

[Tyler Cowen is a professor of economics at George Mason University, where he holds the Holbert C. Harris Chair, and is co-author, with Alex Tabarrok, of the popular economics blog Marginal Revolution. He currently writes the "Economic Scene" column for the New York Times and writes for such magazines as The New Republic and The Wilson Quarterly. Cowen is also general director of the Mercatus Center at George Mason University (funded by the Koch Family Charitable Foundations). Cowen graduated from George Mason University with a B.S. in economics and received his Ph.D. in economics from Harvard University.]

Copyright © 2013 Time Inc.



Creative Commons License
Sapper's (Fair & Balanced) Rants & Raves by Neil Sapper is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Based on a work at sapper.blogspot.com. Permissions beyond the scope of this license may be available here.



Copyright © 2013 Sapper's (Fair & Balanced) Rants & Raves