Thursday, March 23, 2017

Today, Chuck B. Goode

Sometime in the murky past, this blogger was asked, "When you hear a song, how do you know — instantaneously — that it's good and you want to hear it again?" This blogger experienced such a moment when the film "The Blackboard Jungle" (1955) opened with Bill Haley singing "One, two, three o'clock, four o'clock, rock...." Those lyrics were like a knife in the blogger's adolescent brain. Nearly a year before the death of Chuck Berry on March 18, 2017, music critic Chuck Klosterman proclaimed a personalized version of "Hail, Hail, Rock 'n' Roll." If this is a (fair & balanced) appreciation of greatness, so be it.


[x NY Fishwrap 'Zine]
Which Rock Star Will Historians Of The Future Remember? (May 23, 2016)
By Chuck Klosterman


[TagCrowd cloud of the following piece of writing, created at TagCrowd.com]

Classifying anyone as the “most successful” at anything tends to reflect more on the source than the subject. So keep that in mind when I make the following statement: John Philip Sousa is the most successful American musician of all time.

Marching music is a maddeningly durable genre, recognizable to pretty much everyone who has lived in the United States for any period. It works as a sonic shorthand for any filmmaker hoping to evoke the late 19th century and serves as the auditory backdrop for national holidays, the circus and college football. It’s not “popular” music, but it’s entrenched within the popular experience. It will be no less fashionable tomorrow than it is today.

And this entire musical idiom is now encapsulated in one person: John Philip Sousa. Even the most cursory two-sentence description of marching music inevitably cites him by name. I have no data on this, but I would assert that if we were to ask the entire population of the United States to name every composer of marching music they could think of, 98 percent of the populace would name either one person (Sousa) or no one at all. There’s just no separation between the awareness of this person and the awareness of this music, and it’s hard to believe that will ever change.

Now, the reason this happened — or at least the explanation we’ve decided to accept — is that Sousa was simply the best at this art. He composed 136 marches over a span of six decades and is regularly described as the most famous musician of his era. The story of his life and career has been shoehorned into the US education curriculum at a fundamental level. (I first learned of Sousa in fourth grade, a year before we memorized the state capitals.) And this, it seems, is how mainstream musical memory works. As the timeline moves forward, tangential artists in any field fade from the collective radar, until only one person remains; the significance of that individual is then exaggerated, until the genre and the person become interchangeable. Sometimes this is easy to predict: I have zero doubt that the worldwide memory of Bob Marley will eventually have the same tenacity and familiarity as the worldwide memory of reggae itself.

But envisioning this process with rock music is harder. Almost anything can be labeled “rock”: Metallica, ABBA, Mannheim Steamroller, a haircut, a muffler. If you’re a successful tax lawyer who owns a hot tub, clients will refer to you as a “rock-star CPA” when describing your business to less-hip neighbors. The defining music of the first half of the 20th century was jazz; the defining music of the second half of the 20th century was rock, but with an ideology and saturation far more pervasive. Only television surpasses its influence.

And pretty much from the moment it came into being, people who liked rock insisted it was dying. The critic Richard Meltzer supposedly claimed that rock was already dead in 1968. And he was wrong to the same degree that he was right. Meltzer’s wrongness is obvious and does not require explanation, unless you honestly think “Purple Rain” is awful. But his rightness is more complicated: Rock is dead, in the sense that its “aliveness” is a subjective assertion based on whatever criteria the listener happens to care about.

This is why the essential significance of rock remains a plausible thing to debate, as does the relative value of major figures within that system (the Doors, R.E.M., Radiohead). It still projects the illusion of a universe containing multitudes. But it won’t seem that way in 300 years.

The symbolic value of rock is conflict-based: It emerged as a byproduct of the post-World War II invention of the teenager, soundtracking a 25-year period when the gap between generations was utterly real and uncommonly vast. That dissonance gave rock music a distinctive, nonmusical importance for a long time. But that period is over. Rock — or at least the anthemic, metaphoric, Hard Rock Cafe version of big rock — has become more socially accessible but less socially essential, synchronously shackled by its own formal limitations. Its cultural recession is intertwined with its cultural absorption. As a result, what we’re left with is a youth-oriented music genre that a) isn’t symbolically important; b) lacks creative potential; and c) has no specific tie to young people. It has completed its historical trajectory. Which means, eventually, it will exist primarily as an academic pursuit. It will exist as something people have to be taught to feel and understand.

I imagine a college classroom in 300 years, in which a hip instructor is leading a tutorial filled with students. These students relate to rock music with no more fluency than they do the music of Mesopotamia: It’s a style they’ve learned to recognize, but just barely (and only because they’ve taken this specific class). Nobody in the room can name more than two rock songs, except the professor. He explains the sonic structure of rock, its origins, the way it served as cultural currency and how it shaped and defined three generations of a global superpower. He shows the class a photo, or perhaps a hologram, of an artist who has been intentionally selected to epitomize the entire concept. For these future students, that singular image defines what rock was.

So what’s the image?

Certainly, there’s one response to this hypothetical that feels immediate and sensible: the Beatles. All logic points to their dominance. They were the most popular band in the world during the period they were active and are only slightly less popular now, five decades later. The Beatles defined the concept of what a “rock group” was supposed to be, and all subsequent rock groups are (consciously or unconsciously) modeled upon the template they naturally embodied. Their 1964 appearance on “The Ed Sullivan Show” is so regularly cited as the genesis for other bands that they arguably invented the culture of the 1970s, a decade when they were no longer together. The Beatles arguably invented everything, including the very notion of a band’s breaking up. There are still things about the Beatles that can’t be explained, almost to the point of the supernatural: the way their music resonates with toddlers, for example, or the way it resonated with Charles Manson. It’s impossible to imagine another rock group where half its members faced unrelated assassination attempts. In any reasonable world, the Beatles are the answer to the question “Who will be the Sousa of rock?”

But our world is not reasonable. And the way this question will be asked tomorrow is (probably) not the same way we would ask it today.

In Western culture, virtually everything is understood through the process of storytelling, often to the detriment of reality. When we recount history, we tend to use the life experience of one person — the “journey” of a particular “hero,” in the lingo of the mythologist Joseph Campbell — as a prism for understanding everything else. That inclination works to the Beatles’ communal detriment. But it buoys two other figures: Elvis Presley and Bob Dylan. The Beatles are the most meaningful group, but Elvis and Dylan are the towering individuals, so eminent that I wouldn’t necessarily need to use Elvis’s last name or Dylan’s first.

Still, neither is an ideal manifestation of rock as a concept.

It has been said that Presley invented rock and roll, but he actually staged a form of primordial “prerock” that barely resembles the post-“Rubber Soul” aesthetics that came to define what this music is. He also exited rock culture relatively early; he was pretty much out of the game by 1973. Conversely, Dylan’s career spans the entirety of rock. Yet he never made an album that “rocked” in any conventional way (the live album “Hard Rain” probably comes closest). Still, these people are rock people. Both are integral to the core of the enterprise and influenced everything we have come to understand about the form (including the Beatles themselves, a group that would not have existed without Elvis and would not have pursued introspection without Dylan).

In 300 years, the idea of “rock music” being represented by a two‑pronged combination of Elvis and Dylan would be equitable and oddly accurate. But the passage of time makes this progressively more difficult. It’s always easier for a culture to retain one story instead of two, and the stories of Presley and Dylan barely intersect (they supposedly met only once, in a Las Vegas hotel room). As I write this sentence, the social stature of Elvis and Dylan feels similar, perhaps even identical. But it’s entirely possible one of them will be dropped as time plods forward. And if that happens, the consequence will be huge. If we concede that the “hero’s journey” is the de facto story through which we understand history, the differences between these two heroes would profoundly alter the description of what rock music supposedly was.

If Elvis (minus Dylan) is the definition of rock, then rock is remembered as showbiz. Like Frank Sinatra, Elvis did not write songs; he interpreted songs that were written by other people (and like Sinatra, he did this brilliantly). But removing the centrality of songwriting from the rock equation radically alters it. Rock becomes a performative art form, where the meaning of a song matters less than the person singing it. It becomes personality music, and the dominant qualities of Presley’s persona — his sexuality, his masculinity, his larger‑than‑life charisma — become the dominant signifiers of what rock was. His physical decline and reclusive death become an allegory for the entire culture. The reminiscence of the rock genre adopts a tragic hue, punctuated by gluttony, drugs and the conscious theft of black culture by white opportunists.

But if Dylan (minus Elvis) becomes the definition of rock, everything reverses. In this contingency, lyrical authenticity becomes everything; rock is somehow calcified as an intellectual craft, interlocked with the folk tradition. It would be remembered as far more political than it actually was, and significantly more political than Dylan himself. The fact that Dylan does not have a conventionally “good” singing voice becomes retrospective proof that rock audiences prioritized substance over style, and the portrait of his seven‑decade voyage would align with the most romantic version of how an eclectic collection of autonomous states eventually became a place called “America.”

These are the two best versions of this potential process. And both are flawed.

There is, of course, another way to consider how these things might unspool, and it might be closer to the way histories are actually built. I’m creating a binary reality where Elvis and Dylan start the race to posterity as equals, only to have one runner fall and disappear. The one who remains “wins” by default (and maybe that happens). But it might work in reverse. A more plausible situation is that future people will haphazardly decide how they want to remember rock, and whatever they decide will dictate who is declared its architect. If the constructed memory is a caricature of big‑hair arena rock, the answer is probably Elvis; if it’s a buoyant, unrealistic apparition of punk hagiography, the answer is probably Dylan. But both conclusions direct us back to the same recalcitrant question: What makes us remember the things we remember?

In 2014, the jazz historian Ted Gioia published a short essay about music criticism that outraged a class of perpetually outraged music critics. Gioia’s assertion was that 21st‑century music writing has devolved into a form of lifestyle journalism that willfully ignores the technical details of the music itself. Many critics took this attack personally and accused Gioia of devaluing their vocation. Which is odd, considering the colossal degree of power Gioia ascribes to record reviewers: He believes specialists are the people who galvanize history. Critics have almost no impact on what music is popular at any given time, but they’re extraordinarily well positioned to dictate what music is reintroduced after its popularity has waned.

“Over time, critics and historians will play a larger role in deciding whose fame endures,” Gioia wrote me in an email. “Commercial factors will have less impact. I don’t see why rock and pop will follow any different trajectory from jazz and blues.” He rattled off several illustrative examples: Ben Selvin outsold Louis Armstrong in the 1920s. In 1956, Nelson Riddle and Les Baxter outsold “almost every rock ’n’ roll star not named Elvis,” but they’ve been virtually erased from the public record. A year after that, the closeted gay crooner Tab Hunter was bigger than Jerry Lee Lewis and Fats Domino, “but critics and music historians hate sentimental love songs. They’ve constructed a perspective that emphasizes the rise of rock and pushes everything else into the background. Transgressive rockers, in contrast, enjoy lasting fame.” He points to a contemporary version of that phenomenon: “Right now, electronic dance music probably outsells hip‑hop. This is identical to the punk‑versus‑disco trade‑off of the 1970s. My prediction: edgy hip‑hop music will win the fame game in the long run, while EDM will be seen as another mindless dance craze.”

Gioia is touching on a variety of volatile ideas here, particularly the outsize memory of transgressive art. His example is the adversarial divide between punk and disco: In 1977, the disco soundtrack to “Saturday Night Fever” and the Sex Pistols’ “Never Mind the Bollocks, Here’s the Sex Pistols” were both released. The soundtrack to “Saturday Night Fever” has sold more than 15 million copies; it took “Never Mind the Bollocks” 15 years to go platinum. Yet virtually all pop historiographers elevate the importance of the Pistols above that of the Bee Gees. The same year the Sex Pistols finally sold the millionth copy of their debut, SPIN magazine placed them on a list of the seven greatest bands of all time. “Never Mind the Bollocks” is part of the White House record library, supposedly inserted by Amy Carter just before her dad lost to Ronald Reagan. The album’s reputation improves by simply existing: In 1985, the British publication NME classified it as the 13th‑greatest album of all time; in 1993, NME made a new list and decided it now deserved to be ranked third. This has as much to do with its transgressive identity as its musical integrity. The album is overtly transgressive (and therefore memorable), while “Saturday Night Fever” has been framed as a prefab totem of a facile culture (and thus forgettable). For more than three decades, that has been the overwhelming consensus.

But I’ve noticed — just in the last four or five years — that this consensus is shifting. Why? Because the definition of “transgressive” is shifting. It’s no longer appropriate to dismiss disco as superficial. More and more, we recognize how disco latently pushed gay, urban culture into white suburbia, which is a more meaningful transgression than going on a British TV talk show and swearing at the host. So is it possible that the punk‑disco polarity will eventually flip? Yes. It’s possible everyone could decide to reverse how we remember 1977. But there’s still another stage here, beyond that hypothetical inversion: the stage in which everybody who was around for punk and disco is dead and buried, and no one is left to contradict how that moment felt. When that happens, the debate over transgressions freezes and all that is left is the music. Which means the Sex Pistols could win again or maybe they lose bigger, depending on the judge.

“There is a justice-driven part of my brain that believes — or needs to believe — that the cream rises to the top, and the best work endures by virtue of its goodness,” argues the music writer Amanda Petrusich, author of Do Not Sell at Any Price: The Wild, Obsessive Hunt for the World's Rarest 78rpm Records (2014), a dive into the obsessive world of 78 rpm record collectors. “That music becomes emblematic because it’s the most effective. When I think of rock and who might survive, I immediately think of the Rolling Stones. They’re a band that sounds like what we’ve all decided rock ’n’ roll should sound like: loose and wild. Their story reflects that ethos and sound: loose and wild. And also, they’re good.”

This is true. The Rolling Stones are good, even when they release records like “Bridges to Babylon.” They’ve outlived every band that ever competed against them, with career album sales exceeding the present population of Brazil. From a credibility standpoint, the Rolling Stones are beyond reproach, regardless of how they choose to promote themselves: They’ve performed at the Super Bowl, in a Kellogg’s commercial and on an episode of “Beverly Hills, 90210.” The name of the biggest magazine covering rock music was partly inspired by their sheer existence. The group members have faced arrest on multiple continents, headlined the most disastrous concert in California history and classified themselves (with surprisingly little argument) as “the greatest rock and roll band in the world” since 1969. Working from the premise that the collective memory of rock should dovetail with the artist who most accurately represents what rock music actually was, the Rolling Stones are a strong answer.

But not the final answer.

NASA sent the unmanned craft Voyager I into deep space in 1977. It’s still out there, forever fleeing Earth’s pull. No man‑made object has ever traveled farther; it crossed the orbit of Pluto in 1989 and currently tumbles through the interstellar wasteland. The hope was that this vessel would eventually be discovered by intelligent extraterrestrials, so NASA included a compilation album made of gold, along with a rudimentary sketch of how to play it with a stylus. A team led by Carl Sagan curated the album’s contents. The record, if played by the aliens, is supposed to reflect the diversity and brilliance of earthling life. This, obviously, presupposes a lot of insane hopes: that the craft will somehow be found, that the craft will somehow be intact, that the aliens who find it will be vaguely human, that these vaguely human aliens will absorb stimuli both visually and sonically and that these aliens will not still be listening to eight‑tracks.

But it did guarantee that one rock song will exist even if the earth is spontaneously swallowed by the sun: “Johnny B. Goode,” by Chuck Berry. The song was championed by Ann Druyan (who later became Sagan’s wife) and Timothy Ferris, a science writer and friend of Sagan’s who contributed to Rolling Stone magazine. According to Ferris, who was the album’s de facto producer, the folklorist Alan Lomax was against the selection of Berry, based on the argument that rock music was too childish to represent the highest achievements of the planet. (I’m assuming Lomax wasn’t too heavily engaged with the debate over the Sex Pistols and “Saturday Night Fever” either.) “Johnny B. Goode” is the only rock song on the Voyager disc, although a few other tunes were considered. “Here Comes the Sun” was a candidate, and all four Beatles wanted it to be included, but none of them owned the song’s copyright, so it was killed for legal reasons.

The fact that this happened in 1977 was also relevant to the song’s selection. “Johnny B. Goode” was 19 years old that year, which made it seem distinguished, almost prehistoric, at the time. I suspect the main reason “Johnny B. Goode” was chosen is that it just seemed like a reasonable track to select. But it was more than reasonable. Berry was, either deliberately or accidentally, the best possible artist for NASA to select. Chuck Berry may very well become the artist society selects when rock music is retroactively reconsidered by the grandchildren of your grandchildren.

Let’s assume all the individual components of rock shatter and dissolve, leaving behind a hazy residue that categorizes rock ’n’ roll as a collection of memorable tropes. If this transpires, historians will reconstitute the genre like a puzzle. They will look at those tropes as a suit and try to decide who fits that suit best. And that theoretical suit was tailored for Chuck Berry’s body.

Rock music is simple, direct, rhythm‑based music. Berry made simple, direct, rhythm‑based music.

Rock music is black music mainstreamed by white musicians, particularly white musicians from England. Berry is a black man who directly influenced Keith Richards and Jimmy Page.

Rock music is preoccupied with sex. Berry was a sex addict whose only American No. 1 single was about playing with his penis.

Rock music is lawless. Berry went to prison twice before he turned 40.

Rock music is tied to myth and legend (so much so that the decline of rock’s prominence coincides with the rise of the Internet and the destruction of anecdotal storytelling). Berry is the subject of multiple urban legends, several of which might actually be true and which often seem to involve cheapness, violence and sexual defecation.

“If you tried to give rock and roll another name,” John Lennon famously said, “you might call it Chuck Berry.” That quote is as close as we come to a full‑on Sousa scenario, where the person and the thing are ideologically interchangeable. Chuck Berry’s persona is the purest distillation of what we understand rock music to be. The songs he made are essential, but secondary to who he was and why he made them. He is the idea itself. ###

[Charles J. "Chuck" Klosterman is an American author and essayist who has written books and essays focused on US popular culture. He has been a columnist for Esquire and ESPN (online) and wrote "The Ethicist" column for The New York Times Magazine. Klosterman is the author of nine books and his most recent is Chuck Klosterman X: A Highly Specific, Defiantly Incomplete History of the Early 21st Century (2017). He received a BA (English) from the University of North Dakota.]

Copyright © 2017 The New York Times Company



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves

Wednesday, March 22, 2017

The Stupid Version Of US Constitutional History On Display

Today's essay by The Jillster (Harvard history prof Jill Lepore) awoke echoes, both recent and distant, for this blogger. Just before encountering this essay about US Constitutional history, this blogger listened to a portion of the NPR broadcast of the confirmation hearing for US Supreme Court nominee, federal judge Neil Gorsuch. Today's essay mentions The Federalist Society, which harbors right-wing attorneys and jurists. In particular, the US Senate Judiciary Committee hearing eventually provided a colloquy between Gorsuch and the junior US Senator from Texas, C. (for Crackpot) Cruz. Listeners were treated to a fratboy gabfest that bordered on the homoerotic between the pair of Federalist Society true believers. So much for the present, because The Jillster also invoked the contributions of Justice William J. Brennan, Jr. Appointed by Dwight D. Eisenhower to the Court in 1956, Justice Brennan served until 1990 and died in 1997. The mention of Justice Brennan brought back a grad school memory for this blogger. William J. Brennan, Jr. delivered a lecture at Texas Technique in the late 1960s and the blogger will never forget the brilliant eloquence on display. If this is a (fair & balanced) good memory in these dismal days, so be it.

[x New Yorker]
Weaponizing The Past
By The Jillster (Jill Lepore)


[TagCrowd cloud of the following piece of writing, created at TagCrowd.com]

On the night of April 9, 1931, James M. Kiley, thirty-nine, was shot with a .32-calibre pistol at a gas station in Somerville, Massachusetts, during a botched holdup. Kiley, the night manager, had twenty-four dollars in his pocket; the cash in the register was untouched. Herman Snyder, nineteen, was found guilty of first-degree murder and sentenced to death. “Well, that’s that,” Snyder said, when the jury delivered the verdict. But that wasn’t that. Snyder filed an appeal arguing that his constitutional rights had been violated: during his trial, when the judge, the jury, lawyers for both sides, and a court stenographer visited the gas station, the judge refused to allow Snyder to go along. Even Lizzie Borden had been offered a chance to go with the jury to the crime scene, Snyder’s lawyers pointed out, and so had Sacco and Vanzetti.

In the summer of 1933, Snyder’s lawyers went to see Louis Brandeis, the Supreme Court Justice, at his summer home, on Cape Cod; Brandeis, in an extraordinary gesture from the highest court, issued a stay of execution. The Court agreed to hear the appeal, and, in January, 1934, upheld Snyder’s conviction in a 5–4 opinion that proposed a standard for measuring the weight of tradition in fundamental-rights cases, a standard sometimes known as the history test.

Some rights, like freedom of religion, are written down, which doesn’t always make them easier to secure; and some, like the right to marry, aren’t, which doesn’t mean that they’re less fundamental. The Constitution, as originally drafted, did not include a bill of rights. At the time, a lot of people thought that listing rights was a bad idea because, in a republic, the people retain all the rights not specifically granted to the government and because anything written down is both limited and open to interpretation. “What is the liberty of the press?” Alexander Hamilton asked. “Who can give it any definition which would not leave the utmost latitude for evasion?” These were excellent questions, but Hamilton lost the argument. The Bill of Rights was ratified in 1791. Past the question of which rights there remained the question of whose rights. In 1857, in Dred Scott, the Supreme Court asked whether any “negro whose ancestors were imported into this country and sold as slaves” is “entitled to all the rights, and privileges, and immunities” guaranteed in the Constitution. Relying on “historical facts,” the Court answered no, arguing that, at the time of the framing, black people “had for more than a century before been regarded as beings of an inferior order, and altogether unfit to associate with the white race either in social or political relations, and so far inferior that they had no rights which the white man was bound to respect.” After Emancipation, the Fourteenth Amendment, ratified in 1868, cast off the shackles of history with this guarantee: “No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.” Then, in a series of cases in the early twentieth century, the courts began applying parts of the Bill of Rights to the states, mainly by way of the Fourteenth Amendment.

Yet how would judges decide what rights fall under the definition of due process and equal protection? There seemed to be two possibilities: precedent and reasonable judgment. In Snyder v. Massachusetts, Snyder’s attorneys argued that Snyder had a fundamental right to go on the trip to the gas station, under the due-process clause. But Justice Benjamin Cardozo, writing for the majority, said that the question turned not only on a reasonable reading of the Fourteenth Amendment or on precedent but also on whether refusing to bring a defendant with the jury to the crime scene “offends some principle of justice so rooted in the traditions and conscience of our people as to be ranked as fundamental.” He then recited instances, going back to 1747, to show that what Snyder had been denied did not meet this standard.

History, in one fashion or another, has a place in most constitutional arguments, as it does in most arguments of any kind, even those about whose turn it is to wash the dishes. Generally, appeals to tradition provide little relief for people who, historically, have been treated unfairly by the law. You can’t fight segregation, say, by an appeal to tradition; segregation was an entrenched American tradition. In 1896, Plessy v. Ferguson, essentially reprising Dred, cited the “established usages, customs, and traditions of the people” in affirming the constitutionality of Jim Crow laws. In 1954, to challenge such laws, Brown v. Board of Education disavowed historical analysis and cited, instead, social science: empirical data. Meanwhile, Snyder was chiefly cited in appeals of murder convictions involving defendants who claimed that their rights had been violated. In 1945, Justice William O. Douglas cited Snyder in a 5–4 decision reversing the conviction of a Georgia sheriff who had arrested a young black man for stealing a tire and then beaten him to death. The killing was “shocking and revolting,” Douglas wrote, but it was impossible to know whether the victim’s civil rights had been violated. In a fierce dissent, Francis Murphy argued that the reversal was absurd: “Knowledge of a comprehensive law library is unnecessary for officers of the law to know that the right to murder individuals in the course of their duties is unrecognized in this nation.”

But, in recent decades, the history test applied in cases like Snyder has quietly taken a special place; it has been used to help determine the constitutionality of everything from assisted suicide to deportation, by the unlikely route of judicial decisions about sex. History’s place in American jurisprudence took a turn in 1973, in Roe v. Wade, when the Court dusted off its incunabula and looked into what “history reveals about man’s attitudes toward the abortion procedure over the centuries,” as Justice Harry Blackmun explained. Abortion had not been a crime in Britain’s North American colonies, nor was it a crime in most parts of the United States until after the Civil War. “It perhaps is not generally appreciated that the restrictive criminal abortion laws in effect in a majority of States today are of relatively recent vintage,” Blackmun wrote. In turning back the hands of time, he didn’t stop there. “We are told that, at the time of the Persian Empire, abortifacients were known, and that criminal abortions were severely punished. We are also told, however, that abortion was practiced in Greek times as well as in the Roman Era, and that ‘it was resorted to without scruple.’ ” Roe overturned laws passed by state legislatures by appealing to ancient history. William Rehnquist, in his dissent, cited Snyder: “The fact that a majority of the States reflecting, after all, the majority sentiment in those States, have had restrictions on abortions for at least a century is a strong indication, it seems to me, that the asserted right to an abortion is not ‘so rooted in the traditions and conscience of our people as to be ranked as fundamental.’ ”

Not coincidentally, liberals began applying the history test to fundamental-rights cases at the very moment that women and minorities were entering the historical profession and writing history that liberal-minded judges might be able to cite. Conservatives, meanwhile, defined a new historical method: originalism, a method with roots in the kind of analysis made in Dred Scott. Originalism is essentially a very tightly defined history test. Snyder’s invocation of “the traditions and conscience of our people” is like a reader’s pass to the library stacks. There is virtually no end of places in the historical record to look for the traditions and conscience of our people, especially when “our people” is everyone. Originalism, a term coined in 1980, asks judges to read only the books on a single shelf in the library: the writings of delegates to the Constitutional Convention and the ratifying conventions, the Federalist Papers, and a handful of other newspapers and pamphlets published between 1787 and 1791 (and, occasionally, public records relating to debates over subsequent amendments, especially the Fourteenth). Even more narrowly, some originalists insist on consulting only documents that convey the “public understanding” of the writings of these great men. “If someone found a letter from George Washington to Martha telling her that what he meant by the power to lay taxes was not what other people meant,” Robert Bork once wrote, “that would not change our reading of the Constitution in the slightest.”

Roe, along with a series of civil-rights decisions made by the Warren Court, fuelled the growth of a conservative legal movement. The Federalist Society, founded in a number of law schools in 1982, developed an intellectual tradition, promoted scholarship, and sought to place its members on the courts. (Justices Samuel Alito and Clarence Thomas, along with Neil Gorsuch, who has been nominated to join them, are affiliated with the Federalist Society.) Within five years of its founding, the society had chapters at more than seventy law schools.

In 1985, in a speech to the Federalist Society, Ronald Reagan’s Attorney General, Edwin Meese, announced that “the Administration’s approach to constitutional interpretation” was to be “rooted in the text of the Constitution as illuminated by those who drafted, proposed, and ratified it.” He called this a “jurisprudence of original intention,” and contrasted it with the “misuse of history” by jurists who saw, in the Constitution’s “spirit,” things like “concepts of human dignity,” with which they had turned the Constitution into a “charter for judicial activism.” Meese’s statement met with a reply from Justice William Brennan, who said that anyone who had ever studied in the archives knew better than to believe that the records of the Constitutional Convention and the ratifying conventions offered so certain, exact, and singular a verdict as that which Meese expected to find there. (Obama’s Supreme Court nominee Merrick B. Garland clerked for Brennan.) Brennan called the idea that modern judges could discern the framers’ original intention “little more than arrogance cloaked as humility.”

In opposing fundamental-rights arguments, though, the Reagan-era Court used not only originalist arguments but also the history test. In June, 1986, the Court ruled, 5–4, in Bowers v. Hardwick, that the right to engage in homosexual sex was not rooted in tradition; instead, prohibitions on homosexual sex were rooted in tradition. Justice Byron White, writing for the majority, said that these prohibitions had “ancient roots.” In a concurring opinion, Justice Lewis Powell wrote, “I cannot say that conduct condemned for hundreds of years has now become a fundamental right.” Blackmun, in his dissent, argued against this use of history: “I cannot agree that either the length of time a majority has held its convictions or the passions with which it defends them can withdraw legislation from this Court’s scrutiny.”

Antonin Scalia joined the Court in the next term. And, soon afterward, in 1987, Reagan had the opportunity to appoint another Justice, and named Robert Bork. Less than an hour after the nomination was announced, Senator Edward M. Kennedy called for Democrats to resist what he described as Reagan’s attempt to “impose his reactionary vision of the Constitution on the Supreme Court and on the next generation of Americans.” Laurence Tribe, the Harvard law professor, testified in opposition to Bork’s nomination. But concerns about Bork’s vantage on history were not limited to liberal legal scholars. His most determined critics included the federal judge Richard Posner, who wrote of Bork’s views, “There are other reasons for obeying a judicial decision besides the Court’s ability to display, like the owner of a champion airedale, an impeccable pedigree for the decision, connecting it to its remote eighteenth-century ancestor.” In retrospect, the way this debate reached the public was mostly a distraction. The press generally reduced the disagreement to a stubbornly partisan battle in which conservatives and the past squared off against liberals and the future, and missed most of what was at stake: the relationship between history and the law.

Scalia was the Court’s most determined and eloquent originalist, but he also frequently invoked tradition. In 1989, writing for the majority in Michael H. v. Gerald D., a case involving the assertion of parental visitation rights, he argued that finding rights “rooted in history and tradition” required identifying the “most specific” tradition; Brennan, in his dissent, questioned Scalia’s method, writing that the opinion’s “exclusively historical analysis portends a significant and unfortunate departure from our prior cases and from sound constitutional decisionmaking.” As he had in his debate with Meese, Brennan charged Scalia with something between ignorance and duplicity. “It would be comforting to believe that a search for ‘tradition’ involves nothing more idiosyncratic or complicated than poring through dusty volumes on American history,” Brennan wrote, but history is more complicated than that, “because reasonable people can disagree about the content of particular traditions, and because they can disagree even about which traditions are relevant.” Even more fundamentally, Brennan argued that the appeal to tradition essentially nullifies the Fourteenth Amendment, whose whole point was to guarantee constitutional protections to those Americans who had not been protected by the traditions and consciences of other Americans.

If less carefully observed than the debate over originalism, the debate over the history test has influenced judicial nominations for decades. “A core question is whether, in examining this nation’s history and tradition, the Court will protect only those interests supported by a specific and longlasting tradition, or whether the Court will not so constrict its analysis,” Senator Joseph Biden said during hearings on David Souter’s nomination, in 1990. (Biden had been coached by Tribe.) Souter’s answer—“It has got to be a quest for reliable evidence, and there may be reliable evidence of great generality”—satisfied Democrats. Liberal legal scholars, meanwhile, had grown increasingly alarmed by Scalia’s use of history: in a 1990 case, for example, he cited a book written in 1482 in a narrowing definition of due process, and in a 1991 case he cited punishments imposed during the reign of James II to uphold a mandatory life sentence without the possibility of parole for the possession of six hundred and fifty grams of cocaine. The legal scholar Erwin Chemerinsky argued that conservatives on the Court had turned to history-test historicism because originalism is so patently flawed as a mode of constitutional interpretation. (The framers weren’t originalists; Brown v. Board can’t be squared with originalism; originalism can’t be reconciled with democratic self-government.) “The constant use of history to justify conservative results leads to the cynical conclusion that the country has a seventeenth century Court as it enters the twenty-first century,” Chemerinsky wrote in 1993. “It is not enough to make one want to take all the history books out of the Supreme Court’s library, but it makes one come close.”

Or you could write new history books. Geoffrey R. Stone, a distinguished professor and a former dean of the University of Chicago Law School, is a past chairman of the American Constitution Society, which was founded, in 2001, as an answer to the Federalist Society. His new book, Sex and the Constitution: Sex, Religion, and Law from America’s Origins to the Twenty-first Century (2017), locates “America’s origins” in antiquity. Applying the history test to the regulation of sex, Stone begins his inquiry in the sixth century BCE, and expands into a learned, illuminating, and analytical compendium that brings together the extraordinary research of a generation of historians in service of a constitutional call to arms.

Stone started working on the book about a decade ago, not long after the Court reversed Bowers. In Lawrence v. Texas, in 2003, the majority opinion overturned state sodomy laws by rejecting the history presented as evidence in Bowers. Colonial anti-sodomy laws did exist, Kennedy wrote in Lawrence, but they applied to everyone, not just to men; also, they were hardly ever enforced and “it was not until the 1970’s that any State singled out same-sex relations for criminal prosecution, and only nine States have done so.” In short, Kennedy wrote, “the historical grounds relied upon in Bowers are more complex than the majority opinion and the concurring opinion by Chief Justice Burger indicate.”

The tables had turned. Between Bowers and Lawrence, academic historians had produced a considerable body of scholarship about the regulation of sexuality, on which the Court was able to draw. Scalia, in an uncharacteristically incoherent dissent, mainly fumed about this, arguing that “whether homosexual sodomy was prohibited by a law targeted at same-sex sexual relations or by a more general law prohibiting both homosexual and heterosexual sodomy, the only relevant point is that it was criminalized—which suffices to establish that homosexual sodomy is not a right ‘deeply rooted in our Nation’s history and tradition.’ ” Scalia, in effect, accused the majority of doing too much historical research.

The inconsistency is perhaps best explained by the Court’s wish to pretend that it is not exercising judicial discretion. One legal scholar has suggested that the history test is like Dumbo’s feather. Dumbo can fly because he’s got big ears, but he doesn’t like having big ears, so he decides he can fly because he’s got a magic feather. The Court has got big, activist ears; it would rather believe it’s got a magical history feather.

Lately, the field of argument, if not always of battle, in many fundamental-rights cases has moved from the parchment pages of the Constitution to the clay of Mesopotamia. In Obergefell v. Hodges, the 2015 Supreme Court decision that overturned state bans on same-sex marriage, Justice Kennedy, writing for the majority, reached back almost to the earliest written records of human societies. “From their beginning to their most recent page, the annals of human history reveal the transcendent importance of marriage,” he said. “Since the dawn of history, marriage has transformed strangers into relatives, binding families and societies together.” He cited Confucius. He quoted Cicero. The states that wanted to ban same-sex marriage described its practice as a betrayal of that history, but Kennedy saw it as a continuation, a testament to “the enduring importance of marriage.” Marriage is an institution with “ancient origins,” Kennedy said, but that doesn’t mean it’s changeless. Scalia, in a heated dissent, called Kennedy’s opinion “silly” and “pretentious.” As a matter of historical analysis, Scalia mostly confined himself to the past century and a half. “When the Fourteenth Amendment was ratified in 1868, every State limited marriage to one man and one woman, and no one doubted the constitutionality of doing so,” he said. “That resolves these cases.”

Liberal legal scholars disagree, and Stone’s Sex and the Constitution is an attempt to pull together all their evidence, for the sake of court battles to come. Ancient Greeks, Romans, and Jews believed that sex was natural and didn’t have a lot of rules about it, Stone argues. Early Christians, influenced by Augustine of Hippo, who in the fifth century decided that Adam and Eve had been thrown out of the Garden of Eden because of lust, decided that sex was a sin, and condemned all sorts of things, including masturbation. Stone speculates that the medieval church’s condemnation of same-sex sex, a concern that emerged in the eleventh century and that became pronounced in the writings of Thomas Aquinas, was a consequence of a new requirement: clerical celibacy. According to Stone, Aquinas argued that the sins of mutual masturbation, oral sex, and anal sex were worse if they involved two members of the same sex, a position that became church dogma in the sixteenth century.

During the Reformation, Protestants redeemed one kind of sex: intercourse between a married man and woman. (Martin Luther argued that sex was as “necessary to the nature of man as eating and drinking.”) Protestants also rejected the Catholic Church’s condemnation of contraception. But they believed that governments ought to regulate sexual behavior for the sake of public order. In the seventeenth century, most of England’s American colonies had an established religion, an arrangement that, a revolution later, they abdicated.

Enlightenment philosophers rejected Christian teachings about sex, and, believing in the pursuit of happiness, they believed, too, in the pursuit of pleasure. The Constitution and the Bill of Rights say nothing about sex, of any kind, with anyone, under any circumstances. Nor do any of the original state constitutions. Nor did any laws in any of the states, at the time of the founding, forbid sexual expression, or abortion before quickening, and sodomy laws were seldom enforced. That changed in the first half of the nineteenth century, when a religious revival led states to pass new laws, including the first law against obscenity. A campaign against the long-standing practice of abortion began, followed by a crusade against contraception and, at the turn of the twentieth century, the persecution of homosexuals. The cases from Roe to Lawrence to Obergefell, Stone suggests, constitute a revolution, not a turning away but a turning back, toward the Enlightenment.

History written to win a legal argument has a different claim to authority than history written to find out what happened. In a study of sex, Stone might have been interested in any number of practices, but he has confined his investigation to matters that are sources of ongoing constitutional and political debate in the United States today: abortion, contraception, obscenity, and sodomy or homosexuality. Practices that were once crimes, like fornication and adultery, or that are still crimes, like incest, infanticide, and rape, generally lie outside the scope of his concern. This has the effect of obscuring the relationship between things he’s interested in and things he’s not interested in, and it introduces a circularity: he has defined the scope of his study by drawing a line between what’s criminal and what’s not, when how that line came to be drawn is the subject of his study.

The history of the regulation of sexuality, especially the parts he’s chosen to gloss over—which happen to be parts that particularly concern the vulnerability of women and children—is a chronicle of a staggeringly long reign of sanctioned brutality. That reign rests on a claim on the bodies of women and children, as a right of property, made by men. “The page of history teems with woman’s wrongs,” Sarah Grimké wrote in 1837. Stone only skimmed that page. Or consider this page, from the Congressional Record in 1866, during the debate over the Fourteenth Amendment. Jacob Howard, a Republican senator from Michigan, explained that the amendment “protects the black man in his fundamental rights as a citizen with the same shield which it throws over the white man.” Howard assured his audience that the amendment did not guarantee black men the right to vote, even though he wished that it did, and here he quoted James Madison, who’d written that “those who are to be bound by laws, ought to have a voice in making them,” at which point Reverdy Johnson, a Democrat from Maryland, wondered how far such a proposition could be extended, especially given the amendment’s use of the word “person”:

Mr. Johnson: Females as well as males?

Mr. Howard: Mr. Madison does not say anything about females.

Mr. Johnson: “Persons.”

Mr. Howard: I believe Mr. Madison was old enough and wise enough to take it for granted that there was such a thing as the law of nature which has a certain influence even in political affairs, and that by that law women and children are not regarded as the equals of men.

History isn’t a feather. It’s an albatross.

Last year, Neil Gorsuch delivered a memorial tribute to Scalia, in which he said that the Justice’s greatest contribution to jurisprudence was his commitment to historical inquiry. Gorsuch said that Scalia had reminded legal scholars that, rather than contemplating the future, “judges should instead strive (if humanly and so imperfectly) to apply the law as it is, focusing backward, not forward.”

Scalia spent much of his career arguing for the importance of history in the interpretation of the law. “If ideological judging is the malady,” Scalia said in 2010, “the avowed application of such personal preferences will surely hasten the patient’s demise, and the use of history is far closer to being the cure than being the disease.”

Gorsuch’s account of this debate is more measured. Whose history? How far back? “In due process cases, the Supreme Court has frequently looked not only to this nation’s history, but also to English common law,” Gorsuch has written. “But why stop there? Why not examine Roman or Greek or some other ancient precedent as, say, Justice Blackmun did in his opinion for the Court in Roe v. Wade? And what about contemporary experience in other Western countries?” His book on assisted suicide contains a chapter, called “The Debate Over History,” that applies the history test to the question of the right to die. He began his survey with Plato, hopscotched across the centuries, and decided that, while a consensus had grown “that suicide is essentially a medical problem,” the historical record offers, at best, limited support for the idea of a right to assisted suicide and euthanasia. Gorsuch, an eloquent and candid writer, has his doubts about the history test. He writes, “The history test, for all its promise of constraining judicial discretion, carries with it a host of unanswered methodological questions and does not always guarantee the sort of certainty one might perhaps hope for.”

Gorsuch may be dubious about the history test, but he happens to be a particularly subtle scholar of precedent. (He’s a co-author of a new book, The Law of Judicial Precedent [2016]; Scalia had been meant to write the foreword.) And he’s written powerfully about the relationship between history and the law. In 2015, Gorsuch wrote an opinion in a case that concerned Alfonzo Deniz Robles. Deniz, a Mexican citizen, twice entered the United States illegally. He married an American citizen, and had four children. In 2005, the Tenth Circuit court ruled that an immigrant in Deniz’s position was grandfathered into a lapsed program that allowed him to pay a fine and apply for residency, so Deniz applied for a visa. The government held up his application for years, and by the time it was reviewed the Board of Immigration Appeals, an executive agency, overruled the court, requiring him to leave the country for ten years before applying for residency. (“It was, like, Today you can wear a purple hat but tomorrow you can’t,” Deniz’s wife, Teresa, told me. “It was mind-boggling.”) Deniz appealed, on the ground that his rights to due process had been violated.

The appeal reached Gorsuch’s court in 2014, at which point immigration services told Deniz, as Gorsuch explained, “that he’d have to start the decade-long clock now even though if he’d known back in 2005 that this was his only option, his wait would be almost over.” Writing for the court, Gorsuch explained that judicial reasoning is always backward-looking, while legislation is forward-looking; he cited a thirteenth-century English jurist to establish that the presumption against retroactive legislation is nearly as old as common law, and the retrospective effect of judicial decisions, he said, has been established for almost a thousand years. But what about acts of the executive branch? Gorsuch said that if an executive agency is acting like a judge its rulings are retroactive, but if it’s acting like a legislature its rulings are prospective. That is, if the Board of Immigration Appeals makes a new policy, it can’t apply it to people who made choices under the old policy. The Tenth Circuit ruled in favor of Deniz. He still doesn’t have a green card. That will likely take years.

The chain of cases that are of interest to Stone in Sex and the Constitution will be revisited by a newly constituted Supreme Court, once Scalia’s replacement finally takes a seat. More immediately, though, the Court will be asked to rule on the due-process and equal-protection-violation claims made in opposition to President Trump’s early executive orders, as a matter of federal law. “A temporary absence from the country does not deprive longtime residents of their right to due process,” eighteen state attorneys general and others argued in a brief challenging the Trump Administration’s travel ban. Gorsuch’s several rulings urging restraint of the executive branch carry a particular weight in this new political moment, in which the history test is already being applied to those orders. “The framers worried that placing the power to legislate, prosecute, and jail in the hands of the Executive would invite the sort of tyranny they experienced at the hands of a whimsical king,” Gorsuch wrote in a dissent from 2015. A lot of people are still worried about that.

Alfonzo and Teresa Deniz, who live in Wyoming with their kids, have so far spent more than forty thousand dollars on legal fees. They’ve got another court date, on March 21st, the day after the Senate Judiciary Committee begins hearings on Gorsuch’s nomination. The law keeps changing. “You hear a lot of things,” Teresa told me. “It’s scary.” She’s terrified that her children will lose their father. I asked Teresa if she and her husband had ever met Neil Gorsuch. She said no. She didn’t know that he’d been nominated to the Supreme Court. I asked her if she had a message for the Court. “Look at the families,” she said. She began to cry. She said, “I just hope that they can come up with something that is justice.” ###

[Jill Lepore is the David Woods Kemper '41 Professor of American History at Harvard University as well as the chair of the History and Literature Program. She also is a staff writer at The New Yorker. Her latest books are The Story of America: Essays on Origins (2012), Book of Ages: The Life and Opinions of Jane Franklin (2013), and The Secret History of Wonder Woman (2014). Lepore earned a BA (English) from Tufts University, an MA (American culture) from the University of Michigan, and a PhD (American studies) from Yale University.]

Copyright © 2017 The New Yorker/Condé Nast Digital



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves

Tuesday, March 21, 2017

Roll Over, Ancient Chinese Curse — We Now Live In -Interesting- Dangerous Times

The final sentence in today's post brought a sense of dread to this blogger: "We live in dangerous times." Indeed. We have lived through the worst 100-day beginning of a US presidency. The Klown Kar is being driven by a madman. What will the Stupids in Congress do when the FBI reveals that Il Douche is a T-R-A-I-T-O-R? The leakers are the true patriots in these dangerous times. The noose is growing tighter around Il Douche's treasonous neck. If this is a (fair & balanced) call for naming the enemies of the people, so be it.

[x HNN]
The Scary Parallels Between Trump And Mussolini
By Mark Bickhard


[TagCrowd cloud of the following piece of writing, created at TagCrowd.com]

Comparisons between Trump(ism) and Fascism have become frequent, and with good reason. These comparisons are strongest between Trump and Mussolini — stronger than with Hitler and Nazism. Detailed comparisons are difficult for at least two reasons: 1) the historical circumstances of the 1920s and '30s are quite different from those of today, and 2) Fascism was never a coherent political theory or philosophy but was, instead, a populist and nationalist development in Italy that Mussolini did not create, but did take over.

A comparison between Trump and Mussolini in terms of character and style, however, is frighteningly strong — and does give some guidance about what may lie ahead. This comparison is based primarily on quotes from a book about Mussolini by R.J.B. Bosworth (2010). In general, the quotes speak for themselves, though I will add some commentary along the way. It should be noted that this book was published years before similarities between Trump and Mussolini became politically relevant, and thus was not written with Trump in mind.

I begin with Trump’s arrogant ignorance and incoherence:

“Other more critical contemporaries noticed instead the fluctuations in Mussolini’s ideas and the way he preferred to avoid in-depth conversations, sometimes excusing himself by saying that the details should be left to the experts. Here, they discerned, was a leader more interested in imposing his will than in harmonising his attitudes or policies. Here was a politician more interested in seeming to know than in knowing.” pg 142

“He understood that a totalitarian dictator had to be, or to seem to be, expert in everything.” pg 177

“Cowing the press was only one part of building a totalitarian dictatorship.” pg 177

Bosworth points to a later-developing ambition in Mussolini that is not yet overt with Trump — but it has already been hinted at by some in his inner circle:

“The real novelty of his ambition lay in his pretensions to enter the hearts and minds of his subjects, and so install Fascism as a political religion.” pg 177

Again, Trump’s ambition combined with a lack of coherence:

“and so readjusting his own history with his usual aplomb” pg 277

“ ‘Reactionary dictators are men of no philosophy, no burning humanitarian ideal, nor even an economic program of any value to their nation or the world. [George Seldes]’ They were ‘gangsters’ more than anything else.” pg 246

One strikingly specific similarity:

Mussolini appointed his son-in-law as foreign minister (e.g., pg 254).

Trump, of course, is infamous for his ultra-thin skin:

“… he would flick through the French press and grow enraged at any criticism of Italy and himself.” pg 272

“… there were few things which annoyed Mussolini more than overt criticism.” pg 276

“This emotion [anger] had always been a prominent part of the Duce’s reaction to life….” pg 280

Trump and Mussolini share thin-skinned ignorance combined with arrogant contempt:

“The Duce’s version of permanent revolution, it was increasingly plain, was more a story of his own permanent sense that the rest of human kind was not made in his own image (an arrogance which only partially cloaked his own sense of inadequacy …).” pg 282

“… it was plain that he [Augusto Rosso] was another who feared that Ciano [son-in-law] was very young, and very inexperienced in the real world, and who knew that Mussolini did not take his professional diplomats seriously.” pg 292

“In his diary, Bottai depicted a war leader whose administration grew steadily more ‘approximate’, with the Duce, a ‘man of the banner headline’ at heart, now bored by detail or discussion and preferring to ‘let things run of their own accord’.” pg 302

“… the Duce’s reaction, Bottai complained, was, ‘if things go well, take the credit; and, if they go badly, to blame others’. This, Bottai concluded, had become the real meaning of the formula: ‘Mussolini is always right.’ ” pg 303

The following speaks for itself, and speaks volumes:

From A.J.P. Taylor, quoted in Bosworth: “Fascism never possessed the ruthless drive, let alone the material strength, of National Socialism. Morally it was just as corrupting — or perhaps more so from its very dishonesty. Everything about Fascism was a fraud. The social peril from which it saved Italy was a fraud; the revolution by which it seized power was a fraud; the ability and policy of Mussolini were fraudulent. Fascist rule was corrupt, incompetent, empty; Mussolini himself a vain, blundering boaster without either ideas or aims.” pg 344

Here is a passage from a different book, Mussolini and Italian Fascism (2008), by Giuseppe Finaldi:

“Thus Fascism, as it developed in 1920-2, was not a political party, with a programme and an internal structure headed by Mussolini who sent proselytizing disciples into the provinces, but a catch-all movement that, loosely speaking, would have met with the approval of many who saw themselves as belonging to the very widespread political and social environment of the Vittorio Veneters [a nationalist movement]. The ingredient that was (almost) unique to Fascism and which gave it an edge over traditional patriotic parties was its willingness to employ violence for political ends. Its ability to give a semblance of political coherence and a plausible set of symbolic reference points to what was essentially reactionary vigilantism allowed the process of law and the functioning of democracy … to be sidestepped with panache.” (pg 37)

Just as Mussolini took over the Fascist movement, Trump is exploiting and taking over the ultra-nationalist/alt-right movements. These are the power bases for two dictatorial personalities.

Two additional comparisons — one with Hitler and one with Putin — are also relevant here. Hitler and Nazi-ism have both similarities and differences with Trump and Trumpism, but both include the style of creating multiple competing power centers, to be adjudicated by the ultimate authority. This not only creates chaos, it also encourages underlings to strive to produce the positions, actions, and proposals most likely to win the Leader’s favor. It nurtures what came to be called “Working toward the Führer.” It is a formula for extremism.

Violence is central to the history of all of these movements, and both Hitler and Mussolini came to their dictatorial powers via a single defining act of violence: the Reichstag fire for Hitler and the Fascist march on Rome for Mussolini.

Putin, however, demonstrates a different path. Violence, even Putin-directed lethal violence, has been a central part of Putin’s creation of his dictatorship, but there has not been any single violent event that generated his power. Instead, Putin’s history has been one of constant undermining and destruction of competing institutions and individuals, to the point that there are no longer any checks on his power. We have already seen major attacks by Trump on the judiciary and the press, as well as moves to undermine and take over the institutions of public safety. The seditious partisanship of the Republicans in Congress ensures that the legislative branch will not be a check — unless that blind support is somehow itself changed.

The attacks on central institutions of American democracy as “enemies of the people” have a horrible and horribly dangerous historical background. Trump may (or may not) be too ignorant to know of that background, but his inner circle most certainly knows of it, and intends it in full.

And, of course, all of this is in addition to the subversion of American democracy and of the Trump administration by Putin’s Russia.

We live in dangerous times. ###

[Mark Bickhard is Henry R. Luce Professor in Cognitive Robotics and the Philosophy of Knowledge in the Department of Psychology at Lehigh University (PA). Bickhard received three degrees from the University of Chicago: BS (mathematics), MS (statistics), and PhD (human development).]

Copyright © 2017 History News Network



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves

Monday, March 20, 2017

Today, Tom Tomorrow Illustrates The Endless Loop — Yuck. Very Stupid.

The past week, in national news, has been like a week in a rubber room filled with lunatics. Tom Tomorrow (Dan Perkins) explicates in a brief paragraph:


If you spend any time on Twitter, you’ll have noticed people arguing that topic A is a distraction from topic B, which is a distraction from topic C. It’s understandable! We’re being inundated with terribleness right now — someone, I don’t remember who, called it a Denial of Service attack on democracy itself. It's hard to keep up. For this cartoon, it was hard to narrow them down comprehensively. For instance, you may notice that the wiretapping controversy is not mentioned -- I thought about that one but decided it was already contained within "Trump's latest horrifyingly unhinged tweet," and would be redundant.

If this is a weary (fair & balanced) DNS-request for the Oval Office, so be it.


[x TMW]
So Many Distractions
By Tom Tomorrow (Dan Perkins)

Tom Tomorrow/Dan Perkins

[Dan Perkins is an editorial cartoonist better known by the pen name "Tom Tomorrow." His weekly comic strip, "This Modern World," which comments on current events from a strong liberal perspective, appears regularly in approximately 150 papers across the U.S., as well as on Daily Kos. The strip debuted in 1990 in SF Weekly. Perkins, a longtime resident of Brooklyn, New York, currently lives in Connecticut. He received the Robert F. Kennedy Award for Excellence in Journalism in both 1998 and 2002. When he is not working on projects related to his comic strip, Perkins writes a daily political blog, also entitled "This Modern World," which he began in December 2001. More recently, Dan Perkins, pen name Tom Tomorrow, was named the winner of the 2013 Herblock Prize for editorial cartooning. Even more recently, Dan Perkins was a runner-up for the 2015 Pulitzer Prize for Editorial Cartooning.]

Copyright © 2017 This Modern World/Tom Tomorrow (Dan Perkins)




Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves

Sunday, March 19, 2017

Ol' Hedrick (Smith) Asks The Question O'The Day

Hedrick Smith's brief bipartisan survey of the history of presidential animus toward the news media runs from John F. Kennedy through Lyndon B. Johnson to Richard M. Nixon. It is obvious that mendacity wears no party label. From those coverup artists, Smith turns his gaze upon Il Douche, who holds the record for presidential dishonesty after fewer than 100 days in office. If this is a (fair & balanced) consideration of presidential dishonesty, so be it.

[x NY Fishwrap]
So, What Is Trump Hiding?
By Hedrick Smith


TagCrowd cloud of the following piece of writing

created at TagCrowd.com

In his short White House tenure, President Trump has already set a record for histrionic tantrums against the media — whether attacking CNN, The Washington Post, The New York Times or MSNBC for revealing his 2005 tax return, as he did last week. He’s actually pursuing a well-worn path of American presidents blaming the press for their problems.

Five decades of reporting have taught me that whenever a president starts screeching about the media, it’s a sure sign he’s in hot water and fearing revelations about some policy disaster, damaging mendacity or political villainy. Even popular presidents with reputations for charming the press occasionally stoop to blaming the press for quagmires of their own making.

John F. Kennedy, for example.

In September 1963, with the Vietnam War escalating and the pro-American authoritarian regime of President Ngo Dinh Diem besieged by popular protests, President Kennedy used a private meeting with The New York Times’s publisher, Arthur Ochs Sulzberger, and James Reston, the Washington bureau chief, to charge that David Halberstam, the Times correspondent in Saigon, was undermining the American war effort and to pressure the publisher to pull Mr. Halberstam out of Vietnam. President Kennedy was particularly angered by a stream of front-page articles by Mr. Halberstam graphically describing battlefield defeats and the self-immolations of Buddhist monks.

What the president did not know was that The Times was already planning to replace Mr. Halberstam because the editors feared that Vietnamese secret police had marked him for assassination. Because I covered Vietnam policy in Washington, I had been told to get ready to replace Mr. Halberstam.

But after the meeting, Mr. Sulzberger and Mr. Reston postponed my transfer indefinitely. The Times, they said, could not bow to pressure from a president trying to change our news coverage. Two months later, after the Diem regime was overthrown, I was sent to Saigon to replace Mr. Halberstam.

President Kennedy’s successor, Lyndon B. Johnson, intensified this adversarial strategy. He regularly railed against the press for what he and Defense Secretary Robert S. McNamara condemned as biased news coverage that challenged the administration’s line that we were winning the Vietnam War, which Mr. Johnson had expanded with air attacks on North Vietnam. When in December 1966 the Times correspondent Harrison Salisbury went to Hanoi and began filing dispatches about the civilian casualties and destruction caused by the American bombing, the administration all but accused Mr. Salisbury of treason.

The Pentagon insisted that American attacks were carried out with pinpoint precision, civilian casualties were extremely rare, and Mr. Salisbury had become a tool of Hanoi’s propaganda effort. But within months, Deputy Secretary of State Nicholas Katzenbach admitted privately to several reporters in Washington that American air raids were in fact hitting civilian-populated areas of Hanoi, Haiphong and other cities.

During the administration of the next president, Richard M. Nixon, charge and countercharge against the media escalated still further. The Nixon White House even compiled a political “enemies list” including more than 50 in journalism. To combat leaks over war policy, the White House and the FBI director, J. Edgar Hoover, ordered the wiretapping of four reporters, including me, and 14 government officials.

In 1971, my colleague Neil Sheehan obtained Secretary McNamara’s secret Pentagon history of the war, documenting chronic deception of the American people by a succession of Democratic and Republican administrations. When The Times published our articles based on the Pentagon Papers, the Nixon administration went to court to stop publication. The Times was temporarily blocked but other papers picked up the story.

Infuriated, President Nixon insisted that someone “has to go to jail” for the leak. But very quickly, the Supreme Court ruled in favor of the media, and The Times rolled out a book-length volume of articles over 10 days that would forever alter and deepen our understanding of the Vietnam War.

Today, the issues are different, of course — questions about Mr. Trump’s peculiarly warm embrace of Russia’s leader, Vladimir V. Putin, and Russian intelligence agencies meddling in the 2016 elections on Mr. Trump’s behalf. But the clash of powerful institutions is similar.

Mr. Trump’s attack on the media for publishing leaks from the FBI and domestic intelligence agencies succeeded for a few days in diverting public attention from his Russian connections. He and his White House Rasputin, Stephen K. Bannon, may also reckon that by savaging the press, they can intimidate Congress into softening its investigation into the Trump-Russia link.

But the focus has swung back on the central question: What is the president hiding? If his campaign is innocent of illicit Russian connections, why not welcome the investigation and clear the air? If, as Mr. Trump said last month, his former national security adviser, Michael Flynn, was simply “doing his job” in talking with the Russian ambassador Sergey I. Kislyak about American sanctions against Moscow, why did Mr. Flynn lie about it?

More broadly, why has Mr. Trump evaded reporters’ questions about renewed fighting in eastern Ukraine or the Russian deployment of a new missile in conflict with a 1987 arms agreement? Why, after publication of his 2005 tax returns, does he still refuse to release his most recent returns? Will they reveal something that makes him beholden to Mr. Putin and Moscow?

No matter how much the president seeks to demonize the press, these and other crucial questions will not go away because today’s journalists are just as committed as those who covered past presidents to pursue them to the end. ###

[Hedrick Smith is a former Washington bureau chief for The New York Times, author of Who Stole the American Dream? (2012) and executive editor of the website Reclaim the American Dream. Smith received a BA (history) from Williams College and did further graduate study at Oxford University as a Fulbright Scholar.]

Copyright © 2017 The New York Times Company



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves