A while ago, this blogger skimmed past a couple of articles that deal with academic/political conflict over the claims that today's right-wing is bent on subversion of US political institutions. Duke historian Nancy MacLean has become the latest enemy of the Koch brothers and their acolytes for her recent book Democracy in Chains (2017). Mea culpa for the omission of something that is very, very important in our troubled days. Instead, today's post comes at the issue from another direction, thanks to UCLA Professor John McCumber. If this is the (fair & balanced) attempt to nail a lot of jelly to the barn door, so be it.
[x Aeon]
America’s Hidden Philosophy
By John McCumber
The chancellor of the University of California, Los Angeles (UCLA) was worried. It was May 1954, and UCLA had been independent of Berkeley for just two years. Now its Office of Public Information had learned that the Hearst-owned Los Angeles Examiner was preparing one or more articles on communist infiltration at the university. The news was hardly surprising. UCLA, sometimes called the ‘little Red schoolhouse in Westwood’, was considered to be a prime example of communist infiltration of universities in the United States; an article in The Saturday Evening Post in October 1950 had identified it as providing ‘a case history of what has been done at many schools’.
The chancellor, Raymond B Allen, scheduled an interview with a ‘Mr Carrington’ – apparently Richard A Carrington, the paper’s publisher – and solicited some talking points from Andrew Hamilton of the Information Office. They included the following: ‘Through the cooperation of our police department, our faculty and our student body, we have always defeated such [subversive] attempts. We have done this quietly and without fanfare but most effectively.’ Whether Allen actually used these words or not, his strategy worked. Scribbled on Hamilton’s talking points, in Allen’s handwriting, are the jubilant words ‘All is OK – will tell you.’
Allen’s victory ultimately did him little good. Unlike other UCLA administrators, he is nowhere commemorated on the Westwood campus, having suddenly left office in 1959, after seven years in his post, just ahead of a football scandal. The fact remains that he was UCLA’s first chancellor, the premier academic Red hunter of the Joseph McCarthy era and one of the most important US philosophers of the mid-20th century.
This is hard to see today, when philosophy is considered one of academia’s more remote backwaters. But as the country emerged from the Second World War, things were different. John Dewey and other pragmatists were still central figures in US intellectual life, attempting to summon the better angels of American nature in the service, as one of Dewey’s most influential titles had it, of ‘democracy and education’. In this they were continuing one of US philosophy’s oldest traditions, that of educating students and the general public to appreciate their place in a larger order of values. But they had reconceived the nature of that order: where previous generations of US philosophers had understood it as divinely ordained, the pragmatists had come to see it as a social order. This attracted suspicion from conservative religious groups, who kept sharp eyes on philosophy departments on the grounds that they were the only place in the universities where atheism might be taught (Dewey’s associate Max Otto resigned a visiting chair at UCLA after being outed as an atheist by the Examiner). As communism began its postwar spread across eastern Europe, this scrutiny intensified into a nationwide crusade against communism and, as the UCLA campus paper The Daily Bruin put it, ‘anything which might faintly resemble it’.
And that was not the only political pressure on philosophy at the time. Another, more intellectual, came from the philosophical attractiveness of Marxism, which was rapidly winning converts not only in Europe but in Africa and Asia as well. The view that class struggle in Western countries would inevitably lead, via the pseudoscientific ‘iron laws’ of thesis, antithesis and synthesis, to worldwide communist domination was foreign to Marx himself. But it provided a ‘scientific’ veneer for Soviet great-power interests, and people all over the world were accepting it as a coherent explanation for the Depression, the Second World War and ongoing poverty. As the political philosopher S. M. Amadae has shown in Rationalising Capitalist Democracy (2003), many Western intellectuals at the time did not think that capitalism had anything to compete with this. A new philosophy was needed, one that provided what the nuanced approaches of pragmatism could not: an uncompromising vindication of free markets and contested elections.
The McCarthyite pressure, at first, was the stronger. To fight the witch-hunters, universities needed to do exactly what Allen told the Examiner that UCLA was doing: quickly and quietly identify communists on campus and remove them from teaching positions. There was, however, a problem with this: wasn’t it censorship? And wasn’t censorship what we were supposed to be fighting against?
It was Allen himself who solved this problem when, as president of the University of Washington in 1948-49, he had to fire two communists who had done nothing wrong except join the Communist Party. Joseph Butterworth, whose field was medieval literature, was not considered particularly subversive. But Herbert Phillips was a philosophy professor. He not only taught the work of Karl Marx, but began every course by informing the students that he was a committed Marxist, and inviting them to judge his teaching in light of that fact. This meant that he could not be ‘subverting’ his students: they knew exactly what they were getting. Allen nevertheless came under heavy pressure to fire him.
Allen’s justification for doing this became known across the country as the ‘Allen Formula’. The core of it ran like this: members of the Communist Party have abandoned reason, the impartial search for truth, and merely parrot the Moscow line. They should not be allowed to teach, not because they are Marxists (that would indeed be censorship) but because they are incompetent. The Formula did not end there, however. It had to be thoroughly argued and rigorously persuasive, because it had to appeal to a highly informed and critical audience: university professors, whose cooperation was essential to rooting out the subversives in their midst. Ad hoc invocations of the ‘search for truth’ would not suffice. It had to be shown what the search for truth – reason itself – really was. Allen’s ‘formula’ thus became philosophical in nature.
Like the logical positivists of his day, Allen identified reason with science, which he defined in terms of a narrow version of the ‘scientific method’, according to which it consists in formulating and testing hypotheses. This applied, he claimed in a 1953 interview with The Daily Bruin, even in ‘the realm of the moral and spiritual life’: Buddha under the banyan tree, Moses on Sinai, and Jesus in the desert were all, it appears, formulating hypotheses and designing experiments to test them.
The Allen Formula gave universities two things they desperately needed: a quick-and-dirty way to identify ‘incompetents’, and a rationale for their speedy exclusion from academia. Since rationality applies to all human activities, the Formula could be used against professors who, like Butterworth, were competent in their own disciplines, but whose views in other fields (such as politics) had not been formulated ‘scientifically’. Moreover, and conveniently, rationality was now a matter of following clear rules that went beyond individual disciplines. This meant that whether someone was ‘competent’ or not could be handed over to what Allen called members of ‘the tough, hard-headed world of affairs’ – in practice, administrators and trustees – rather than left to professors actually conversant with the suspect’s field. Professors thus found themselves freed from having to deal with cases of suspected subversion. Small wonder that, according to the historian Ellen Schrecker, Allen’s actions, and his rationale for them, set a precedent for universities across the country, and catapulted Allen himself to national fame and to a new job at UCLA.
The Allen Formula was administered, at UCLA and elsewhere in California, through something called the California Plan. Imitated to varying degrees in other states, the Plan required the head of every institution of higher education in California, public and private, to send the name of every job candidate at their institution for vetting by the state senate’s committee on un-American activities. The committee would then consult its database of subversives and inform the university whether the candidate was in it. What to do next was, officially, up to the university; but the committee’s policy was that if an identified subversive was actually hired, it would go public, issuing subpoenas and holding hearings. As Schrecker notes in No Ivory Tower (1986), no college could hope to deal with such publicity, so the Plan effectively gave the committee ‘a veto over every single academic appointment in the state of California’.
The California Plan was supplemented at the University of California by a memo in April 1952 from President Robert G Sproul to department chairs and other administrative officers, directing departments to canvass the publications of job candidates in order to enforce the university policy that ‘prohibited the employment of persons whose commitments or obligations to any organisation, communist or other, prejudiced impartial scholarship and teaching and the free pursuit of truth’. As the language here makes clear, it is not merely communists who are the problem, but anyone who is not ‘impartial’. Sproul, like other academics, followed the Allen Formula.
This official emphasis on scientific impartiality excluded adherents of a number of influential philosophical approaches from employment in California. Non-communist Marxists whose beliefs reposed on readings of history rather than on logic and mathematics were said to have abandoned what was rapidly defined as philosophy’s ancient concern with strict objectivity in favour of what Allen called ‘leading parades’. Existentialists and phenomenologists did not follow the experimental method (and the former tended to be atheists as well). Many pragmatists did not even believe that there was a single scientific method: true to their name, they believed that scientific enquiry should be free to apply whatever procedures worked. Moreover, whether a method ‘worked’ or not in a given case should be a matter of its social benefit, a dangerously collectivist standard in those difficult days. It was far safer to see the scientific enterprise as what Allen called it in Communism and Academic Freedom (1949): a ‘timeless, selfless quest of truth’.
The California Plan operated in the greatest secrecy. Ending someone’s career in public required extensive justification, multiple hearings, and due process, all of which could provoke damaging public outcries. The need for secrecy also explains why the Plan emphasised preventing hires rather than rooting out subversives already in teaching positions. As the committee noted in its annual report for 1953, professors already on campus had networks of friends and supporters. Efforts to remove them often produced loud backlashes which, in the committee’s view, invariably benefitted the Communist Party.
According to its advocates, the Plan was a great success. In March 1952, 10 months after it was implemented, the committee’s staffer, Richard Combs, estimated that it had prevented about one academic hire per day in the state. The next year, Allen himself declared that ‘so far, the arrangement is working to mutual advantage’.
As long as Allen remained chancellor, the Plan’s secrecy was successfully maintained at UCLA. Two years after he left, however, attacks resumed: the anthropologist John Greenway was fired in 1961 for suggesting that the Roman Catholic Mass exhibited traces of cannibalism. Three years after that, the philosopher Patrick Wilson was denounced by leading Los Angeles clergymen for the way he taught philosophy of religion. The seven years of silence while Allen served as chancellor at UCLA are testimony to his, and the Plan’s, success at tamping down controversy. We will never know, of course, the number of job candidates who lost their careers before they even started.
Things took a different turn at the university’s other campus, Berkeley. Unlike Allen, Berkeley’s chancellor, Clark Kerr, refused to cooperate with the Plan, with the result that, unbeknown to Kerr, a university security officer named William Wadman took it over. Wadman’s view of his job went well beyond merely forwarding the names of job candidates. It amounted to a general political policing of the faculty, and this attracted national attention. In March 1954, after Wadman’s activities became public, an article in the far-off Harvard Crimson quoted Richard Combs: ‘If, after looking over charges against a professor and investigating them, Wadman thinks the man should be removed, he goes to the state committee and discusses the case. If the... committee agrees with him, the information is passed on to the president of the university [Sproul], who calls for the professor’s resignation.’
The initiative in this arrangement clearly belonged to Wadman. The committee itself was known to be rabidly anti-communist and eager to justify its existence by capturing ‘subversives’, while Sproul’s assent to its findings is portrayed as virtually automatic. The Crimson article goes on to summarise Combs as saying that ‘any professor in the college – not merely those in classified research – can be dealt with in this manner’. Which means, if true, that every professor in the college – not just those in classified research – owed his job to the benign disregard, at least, of Wadman.
As all this was happening, US academics also faced the task of coming up with a philosophical antidote to Marxism. Rational choice theory, developed at the RAND Corporation in the late 1940s, was a plausible candidate. It holds that people make (or should make) choices rationally by ranking the alternatives presented to them with regard to the mathematical properties of transitivity and completeness. They then choose the alternative that maximises their utility, advancing their relevant goals at minimal cost. Each individual is solely responsible for her preferences and goals, so rational choice theory takes a strongly individualistic view of human life. The ‘iron laws of history’ have no place here, and large-scale historical forces, such as social classes and revolutions, do not really exist except as shorthand for lots of people making up their minds. To patriotic US intellectuals, rational choice theory thus held great promise as a weapon in the Cold War of ideas.
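The formal core of this picture is small enough to write down. Below is a minimal sketch in Python (my illustration, with invented alternatives and utility numbers; nothing here is drawn from RAND’s own formalism): a utility function induces a preference relation, the relation is checked for the completeness and transitivity the theory demands, and the ‘rational’ agent simply picks the maximum.

```python
# Minimal sketch of rational choice theory's core move (illustrative only;
# the alternatives and utility values below are invented for the example).

ALTERNATIVES = {
    "end_world_hunger": 9.0,
    "pass_the_bar": 6.5,
    "buy_private_jet": 4.0,
}

def prefers(a, b, utility):
    """Weak preference induced by utility: a is at least as good as b."""
    return utility[a] >= utility[b]

def is_complete(utility):
    """Completeness: any two alternatives are comparable."""
    alts = list(utility)
    return all(prefers(a, b, utility) or prefers(b, a, utility)
               for a in alts for b in alts)

def is_transitive(utility):
    """Transitivity: if a is preferred to b, and b to c, then a to c."""
    alts = list(utility)
    return all(prefers(a, c, utility)
               for a in alts for b in alts for c in alts
               if prefers(a, b, utility) and prefers(b, c, utility))

def rational_choice(utility):
    """Choose the utility-maximising alternative from a ranked set."""
    assert is_complete(utility) and is_transitive(utility)
    return max(utility, key=utility.get)

print(rational_choice(ALTERNATIVES))  # -> end_world_hunger
```

One design point worth noting: any preference ordering read off from real-valued utilities is automatically complete and transitive, since the ordering of the real numbers has both properties; that is why presentations of the theory so often begin from utilities rather than from raw preferences.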
But it needed work. Its formulation at RAND had been keyed to the empirical contexts of market choice and voting behaviour, but the kind of Marxism it was supposed to fight (basically, Stalinism) did not accept either free markets or contested elections as core components of human society. Rational choice theory therefore had to be elevated from an empirical theory covering certain empirical contexts into a normative theory of the proper operation of the human mind itself. It had to become a universal philosophy. Only then could it justify the US’ self-assumed global mission of bringing free elections and free markets to the entire world.
Scientific method was already installed as coextensive with reason itself – philosophically by the logical positivists, and politically by the Allen Formula. All that was needed was to tie rational choice to the scientific method. This was accomplished paradigmatically by the UCLA philosopher Hans Reichenbach’s book The Rise of Scientific Philosophy (1951). In a crucial paragraph, Reichenbach wrote:
a set of observational facts will always fit more than one theory.... The inductive inference is used to confer upon each of these theories a degree of probability, and the most probable theory is then accepted.
Facts always underdetermine theories, and this requires scientists to choose from an array of alternative theories, under a preference for highest probability. Science thus becomes a series of rational choices. Which meant that by 1951 there was a unified intellectual response to the two pressures: appeals to science fought the domestic subversives, and when science was integrated with rational choice theory it entered the global conflict. The battle was on, and what I call Cold War philosophy began its career, not only in fighting the Cold War of ideas, but in structuring US universities and US society.
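Read this way, theory acceptance has the same shape as the choice procedure sketched above. A toy Bayesian rendering (again my own; the theories, priors and likelihoods are invented numbers, and Reichenbach’s paragraph does not specify a mechanism) makes the point: the same observed facts fit every candidate theory, inductive inference confers a posterior probability on each, and ‘accepting the most probable theory’ is just another maximisation.

```python
# Toy rendering of Reichenbach's picture (hypothetical numbers throughout):
# several theories fit the observed facts; induction assigns each a
# probability; the scientist accepts the most probable one.

priors = {"theory_A": 1 / 3, "theory_B": 1 / 3, "theory_C": 1 / 3}

# Probability each theory assigns to the observed facts (invented):
likelihood = {"theory_A": 0.8, "theory_B": 0.5, "theory_C": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood.
evidence = sum(priors[t] * likelihood[t] for t in priors)
posterior = {t: priors[t] * likelihood[t] / evidence for t in priors}

# "The most probable theory is then accepted" - a rational choice.
accepted = max(posterior, key=posterior.get)
print(accepted, round(posterior[accepted], 2))  # -> theory_A 0.57
```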
To be sure, interest in the California Plan seems to have petered out well before California’s anti-communist senate committee was disbanded in 1971. Even before then, the Plan was not entirely successful, as witnessed by the hiring in 1964 of the Marxist philosopher Herbert Marcuse to the philosophy department at the University of California at San Diego. That hiring was not without problems, however; public outcries against Marcuse culminated, in 1968, in armed guards, organised by his graduate students, spending the night in his living room.
But to say that with the waning of McCarthyism Cold War philosophy itself vanished from the scene is far too simplistic. The Cold War lasted until the dissolution of the Soviet Union in 1991, and Cold War philosophy is still with us today. Thus, humanists long ago abandoned McCarthy-era attempts to subject their work to scientific method (as New Criticism was held to do). But in universities at large, intellectual respectability still tends to follow the sciences.
Cold War philosophy also continues to structure US society at large. Consider the widespread use of multiple-choice tests for tracking students. Whether one takes the ACT or the SAT, one is basically being tested on one’s ability to choose, quickly and accurately, from a presented array of alternative answers, under a preference, of course, for agreement with the test designers. Rational choice thus became the key to one’s placement in the national meritocracy, as illustrated by what I call the ‘40’s test’: if you know that someone has got 440, 540, 640 or 740 on the SATs (under the scoring system in effect until March 2016), you usually know a lot about their subsequent life. Someone who scored a 440, for example, likely attended a community college or no college, and worked at a relatively humble job. Someone with a 740 was usually accepted into an elite university and had much grander opportunities. Many countries, of course, have meritocracies, but few pin them as tightly to rational choice as the US does.
Cold War philosophy also influences US society through its ethics. Its main ethical implication is somewhat hidden, because Cold War philosophy inherits from rational choice theory a proclamation of ethical neutrality: a person’s preferences and goals are not subjected to moral evaluation. As far as rational choice theory is concerned, it doesn’t matter if I want to end world hunger, pass the bar, or buy myself a nice private jet; I make my choices the same way. Similarly for Cold War philosophy; but it also has an ethical imperative that concerns not ends but means. However laudable or nefarious my goals might be, I will be better able to achieve them if I have two things: wealth and power. We therefore derive an ‘ethical’ imperative: whatever else you want to do, increase your wealth and power!
Results of this are easily seen in today’s universities. Academic units that enable individuals to become wealthy and powerful (business schools, law schools) or stay that way (medical schools) are extravagantly funded; units that do not (humanities departments) are on tight rations. Also on tight rations nationwide are facilities that help individuals become wealthy and powerful but do not confer competitive advantage on them because they are open to all or most: highways, bridges, dams, airports, and so on.
Seventy years after the Cold War began, and almost 30 after it ended, Cold War philosophy also continues to affect US politics. The Right holds that if reason itself is rooted in market choice, then business skills must transfer smoothly into all other domains, including governance – an explicit principle of the Trump administration. On the Left, meritocracy rules: all three of Barack Obama’s Supreme Court nominees attended law school at either Harvard (as Obama himself did) or Yale (as Hillary Clinton did). The view that choice solves all problems is evident in the White House press secretary Sean Spicer’s presentation of the Republican vision for US health care, at his press briefing last March 23: ‘We’ve lost consumer choice.... The idea is to instill choice back into the market.’
Part of the reason for Cold War philosophy’s continuing dominance is that though it is really a philosophy, proffering a normative and universal theory of correct reasoning, it has never been directly confronted on a philosophical level. Its concern with promulgating free markets and contested elections gave it homes in departments of economics and political science, where it thrives today. Philosophers, for their part, have until recently occupied themselves mainly with apolitical fields such as logic, metaphysics and epistemology.
On a philosophical level, however, Cold War philosophy has some obvious problems. Its ‘ethics’, for example, is not a traditional philosophical ethics at all. From Plato to the pragmatists, philosophical ethics has concerned the integration of the individual into a wider moral universe, whether divine (as in Platonic ethics) or social (as in the pragmatists). This is explicitly rejected by Cold War philosophy’s individualism and moral neutrality as regards ends. Where Adam Smith had all sorts of arguments as to why greed was socially beneficial, Cold War ethics dispenses with them in favour of Gordon Gekko’s simple ‘Greed is good.’
Another problem with Cold War philosophy’s ethics concerns what I will call ‘disidentification’. Whatever I choose has at least one alternative; otherwise there would be no choice. But if I identify myself at the outset with any of my plurality of alternatives, I cannot choose any alternative to it; doing that would end my identity and be suicidal, physically or morally. Therefore, any alternative I consider in the course of making a rational decision is something I can walk away from and still be me. This is not an issue for rational choice theory, which concerns cases where my identity is not at stake, such as choosing which brand of toothpaste to buy, or (usually) which candidate to vote for. But when rational choice theory becomes Cold War philosophy, it applies to everything, and everything about me becomes a matter of choice.
This in turn leads me to abandon my own identity, in the following way: suppose that what I am choosing is my religion, and that my alternatives are Catholicism and Hinduism. If I am already a Catholic, however, Hinduism cannot be a serious alternative, because one’s religion is (usually) part of one’s identity. If I am to choose between Catholicism and Hinduism, I must put both at a distance. I must ‘disidentify’ with them. And since Cold War philosophy bids us to take this stance on all things, at the limit the moral agent must be disidentified from everything, and can have no other fundamental identity than being a rational chooser, ie someone who first orders her preferences according to transitivity and completeness, and then opts for the highest utility. That is a pretty thin identity. Everyone has certain characteristics that they simply cannot or will not relinquish under any circumstances; stripped of those, what else is there to live for?
The widespread success of rational choice theory, coupled with the problems of Cold War philosophy, suggests that the problem lies in what differentiates the two: Cold War philosophy’s claim, inherited from Allen, to universal, and indeed sole, validity as an account of human reason. If we look at the history of philosophy, reason has been many things. For the Greeks, it was basically the capacity to grasp universals – to see present givens as instantiations of underlying structures. For René Descartes, it was the ability to provide an a priori and so ‘unshakable’ foundation for beliefs. For Immanuel Kant, it was the ability to generalise conceptions to the maximum, which provided the foundation for the absoluteness of the moral law. Similarly, freedom has not always been merely a matter of choice. For Aristotle, you act freely, and are responsible for an action, when you desire to perform that action and your reason tells you it is the correct action in the circumstances. To act freely is thus to act from your entire moral being. This idea, that freedom is really the capacity for complete self-expression, is summed up in Hegel’s pithy remark that true freedom is the apprehension of necessity: it is to understand, in a particular situation, what it is that you have to do in order to be you.
None of this suggests that we should stop valuing freedom of choice. But we should stop assuming that making choices amounts to freedom itself, or that making them rationally is the whole job of human reason. Freedom of choice, like free markets and contested elections, is valuable only when situated within wider horizons of value. Divorced from them, it becomes first absolute and then disastrous. Free markets, for example, are wonderful tools for enhancing human life. So are MRIs; but you can’t just drop an MRI on a street corner and expect it to function. Both kinds of device require proper installation and constant tending. The penalties for ignoring this became evident in the financial crisis of 2008.
The absolutising of things such as freedom of choice – the view that free markets and contested elections suffice for a good society – came into prominence with the early Cold War, when the proliferation of choices was our main contrast with Soviet Marxism. In reality, there is much more to a good society than the affordance of maximum choice to its citizens. With market fundamentalism dominating the US government, and with phantasms being paraded in the media under the sobriquet of ‘alternative facts’ that you can choose or reject, forgetfulness of the McCarthy era and the Cold War philosophy it spawned is no longer a rational option. # # #
[John McCumber is a Distinguished Professor and chair of the Germanic Languages Department at the University of California at Los Angeles (UCLA). Most recently, he has written The Philosophy Scare: The Politics of Reason in the Early Cold War (2016). He received a BA (philosophy) from Pomona College (CA) and both an MA and PhD (philosophy) from the University of Toronto.]
Copyright © 2017 Aeon Media Group
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © 2017 Sapper's (Fair & Balanced) Rants & Raves