Thursday, September 04, 2003

2.5M Manufacturing Jobs Lost? No Wonder W Wants A Deputy Secretary of Commerce(?) for Manufacturing Employment

W has asked Secretary of Commerce Donald Evans to appoint an assistant secretary to focus on the needs of manufacturers. What about the needs of WORKERS, W? The poor sumbitch knows that he's in trouble on the loss of manufacturing jobs, so he appoints an assistant secretary of Commerce? The next time your house is on fire, call Animal Control. The Left in Madison, WI is right: we need a new president. As was the case in 1932, anyone is better than the Republican candidate. Can the Democrats find a paraplegic? If this be treason, make the most of it.



[x Capital Newspapers, Madison, WI]
An editorial
September 3, 2003

Since President Bush assumed office on Jan. 20, 2001, the United States has lost more than 2.5 million manufacturing jobs - roughly 50,000 of which came from Wisconsin. When the president should have been focused on the hemorrhaging of vital industries, he instead poured his energies into cutting taxes for the super-rich. The manufacturing job losses are the tip of the economic iceberg. According to the U.S. Labor Department, more than 9 million Americans are looking for work. And while there has been some upturn in manufacturing orders in recent days, there is little evidence in the lives of unemployed Americans to suggest that the United States has turned the corner out of Bush's recession.

However, now that the 2004 election season is just around the corner, the president says he "gets it." "There's a problem with the manufacturing sector," Bush told a Labor Day rally in Ohio, where he wore a union-made costume that looked almost as silly as the flight suit he pulled on to declare in May that the United States had accomplished its mission in Iraq.

Bush coupled his newfound understanding - or should we call it his "election season conversion"? - with a pledge to activate a federal government that has paid little attention to dislocated workers.

Performing his best FDR imitation, Bush announced, "We have a responsibility that when somebody hurts, the government has to move."

But how should the government move? Hopefully not with more of the same.

Bush trade representatives continue to aggressively push for international trade agreements that have been proven to undermine job security, wages, benefits and environmental protections not just in the United States but in the countries we trade with. The "race to the bottom" that began with the North American Free Trade Agreement will be accelerated by a Free Trade Area of the Americas agreement that critics correctly refer to as "NAFTA on steroids." So if government wants to aid American manufacturing workers, step one is to step back from the FTAA.

Bush economic aides, and their allies in the Congress, continue to push for a trickle-down economics approach that relies on tax cuts for the rich to restart the economy. But two years into the experiment, the trickle-down theory is failing just as miserably as it did when Ronald Reagan was president.

The trick is to get money to working people who actually spend it, rather than to millionaires who bank their tax rebates - or use them to shift business operations to the Bahamas. So if government wants to aid American manufacturing workers, step two is to put the brakes on implementation of tax cuts for the rich and shift the money to unemployment benefits and job creation.

If President Bush took these two simple steps, people might believe him when he says, "I believe there are better days ahead for people who are working and looking for work."

Then again, if he fails to take these steps, working people and people looking for work might just bring on the better days by electing a better president.

Copyright © 2003 Capital Newspapers


Immaculate Conception: From the Left & From the Right

OK, OK. Politics makes strange bedfellows. Molly Ivins, one minute. A Policy Review essayist, the next minute. In the interest of fairness & balance — watchwords of this Blog — I will grant Garfinkle several of his points. We in the United States have a parochial view of the world. We in the United States do not deal well with complicated issues. What I like about the Garfinkle essay is his willingness to flail the Right and the Left equally. I disliked his defense of W, particularly this sentence: "An administration has a right to its own rhetoric, and with it a right to put some distance between that rhetoric and its actual conduct." W's rhetoric is babble. I do not think for a MINUTE that W differentiates between his rhetoric ("Bring 'em on!") and his actual conduct (as long as it's someone else taking 'em on). However, the point is well-taken. W and the rest of us don't have a clue as to what is good policy v. bad policy. W will pay the price in the history books later this century. If this be (fair & balanced) sedition, make the most of it.


Foreign Policy Immaculately Conceived
By Adam Garfinkle

For most normal people most of the time, thinking about U.S. foreign policy is an Andy Warhol sort of experience, which is to say that for about 15 minutes every few months (or years, for some), a foreign policy matter becomes “famous” in their consciousness. When a talented but untutored journalistic mind focuses on a foreign policy issue, particularly one that editors will pay to have written about, an amazing thing sometimes happens: All of a sudden, crystalline truth rises from the clear flame of an obvious logic that, for some unexplained reason, all of the experts and practitioners thinking and working on the problem for years never saw. This is the immaculate conception theory of U.S. foreign policy at work.

The immaculate conception theory of U.S. foreign policy operates from three central premises. The first is that foreign policy decisions always involve one and only one major interest or principle at a time. The second is that it is always possible to know the direct and peripheral impact of crisis-driven decisions several months or years into the future. The third is that U.S. foreign policy decisions are always taken with all principals in agreement and are implemented down the line as those principals intend — in short, they are logically coherent.

Put this way, of course, no sentient adult would defend such a theory. Even those who have never read Isaiah Berlin intuit from their own experiences that tradeoffs among incommensurable interests or principles are inevitable. They recognize that the urgent and the imminent generally push out the important and the eventual in high-level decision making. They know that disagreement and dissension often affect how public policy is made and applied. More than that, any sober soul is capable of applying this elemental understanding to particular cases if he really puts his mind to it.

Oh, really? Not so for some, apparently, when grinding axes against a deadline. Prominent examples from the recent history of U.S. policy in Southwest Asia — examples that still bear on current matters, as it happens — show that the immaculate conception theory is alive and all too well. They also show, mindless aphorism notwithstanding, that hindsight is not necessarily 20/20.

How many times have we heard the clarion claim that the covert U.S. effort to aid the Afghan mujahedeen through the Pakistani regime during the 1980s was, in the end, a terrible mistake because it led first to a cruel Afghan civil war and then to the rise of the Taliban? I have lost count.

This argument is about as cogent as saying to a 79-year-old man — Ralph, let’s call him — that he should never have gotten married because one of his grandsons has turned out to be a schmuck. But a person does not consider marriage with the character of one of several theoretical grandchildren foremost in mind. It was not possible at the time of the nuptials for Ralph to have foreseen the personality quirks of a ne’er-do-well son-in-law not yet born; so, lo and behold, the fine upbringing that he bequeathed to his children somehow got mangled in translation to the next generation. These things happen.

Similarly, in 1980, when the initial decision was made (in the Carter administration, by the way) to establish links with the mujahedeen, the preeminent concern of American decision makers was not the future of Afghanistan, but the future of the Soviet Union and its position in Southwest Asia. Whatever the Politburo intended at the time, the consolidation of Soviet control in Afghanistan would have given future Soviet leaders options they would not otherwise have had. In light of the strategic realities of the day, the American concern was entirely reasonable: Any group of U.S. decision makers would have thought and done more or less the same thing, even if they could have foreseen the risks to which they might expose the country on other scores.

But, of course, such foresight was impossible. Who in 1980 or 1982 or 1985 could have foreseen the confluence of events that would bring al Qaeda into being, with a haven in Afghanistan? The Saudi policies that led to bin Laden’s exile and the Kuwait crisis that led to the placement of U.S. forces on Saudi soil had not yet happened — and neither could have been reasonably anticipated. The civil strife that followed the exit of the Red Army from Afghanistan, and which established the preconditions for the rise of the Taliban government, had not yet happened either. Of course, despite the policy’s overall success in undermining the Soviet position in Afghanistan, entrusting Pakistan’s Inter-Services Intelligence Directorate to manage aid to the mujahedeen turned out to be problematic, but who of the immaculate conception set knows whether there were better alternatives available at the time? There weren’t; a tradeoff was involved, and it was a tradeoff known to carry certain risks.

True, the United States walked away too soon from Afghanistan after the Red Army departed in 1989, but the Berlin Wall was falling and it seemed that more important issues were at hand. (And they were more important.) Besides, pace the third premise of the theory of the immaculate conception, there was disagreement among administration experts as to what would happen in Afghanistan. One prominent insider, born in Afghanistan, was confident that things would not go sour. He was mistaken, but his assessment was not unreasonable. These things happen.

Another example of immaculate conception-style analysis, also very well worn, specifically concerns the shah of Iran; but this example has a generic character known as the “friendly tyrants” problem. The particular claim is almost endlessly made that it was a terrible mistake for the CIA to have overthrown Mohammed Mossadegh in 1953 to restore the shah to his Peacock Throne, for that, it is averred, is what brought the Ayatollah Khomeini to power and sired the disaster of 1978-79 (and, one could reasonably add, the disaster of 1979 to present). The generic “friendly tyrants” argument is now applied widely if thoughtlessly to U.S. support over the years for undemocratic regimes in Egypt, Jordan, Saudi Arabia, Morocco, Pakistan, and other Muslim countries. The argument, such as it is, goes like this: The people in these countries hate the United States because the U.S. government is complicit in their being repressed by ugly and incompetent regimes. Just as in Iran, they predict, we’ll be sorry one day — “after the revolution comes” — for ever having helped the “bad guys.”

Now, the fall of the shah and the rise of the Islamic Republic is a complex case; one could write an entire book about it. Indeed, some have done just that — and all of the serious books on the subject show that the immaculate conception theory has it wrong. American interests in Iran in the early 1950s (the broader Western interest, too; the British had as much to do with the fall of Mossadegh as did the United States) had to do with Cold War geopolitics. Mossadegh was anti-Western by rhetoric and policy disposition. When he came to power in 1951, the Truman administration worried, particularly in light of Soviet behavior in northern Iran after World War II, that a populist regime of that sort would end up being allied with or suborned by the Soviet Union. The coming of the Eisenhower administration did not allay U.S. fears as Mossadegh’s policies became ever more worrisome, and so a plan devised mostly before Eisenhower took office went successfully forward, with the result that Iran remained a bulwark of Western defenses in the Middle East and Southwest Asia for the critical next quarter-century.

It is easy now to dismiss this benefit as “mere,” but it was not mere at the time. The pro-Western orientation of Iran from a time before Suez until the Camp David Accords was of enormous value to U.S. and Western statecraft. Not only did it make military planning for Southwest Asia a far less onerous task for the United States than it would have been if Iran had been a Soviet client, but it helped balance regional geopolitics for important American allies, notably Turkey and Israel. The shah’s relatively moderate hand in the development of OPEC, at least until 1974, was also of immeasurable benefit to the postwar international economy. Playing back history in the counterfactual tense is a frustrating and often futile exercise, but as thought exercises go, trying to imagine how constrained U.S. policies might have been with Iran on the wrong side of the Cold War is sobering. Whether one takes the Suez crisis (which was botched enough from the American side as it was), the twin Middle East crises of 1958 over revolution in Iraq and near-civil war in Lebanon, or the 1967 and 1973 Arab-Israeli wars, U.S. options, military and diplomatic, would have been much impoverished. Indeed, the whole face of U.S. policy in the region, which was overall a very successful one, would have looked very different and likely much more dour.

Key to the immaculate conception case with regard to Iran is the rather flat portrait presented of the shah’s moral and political debility. The United States cannot be guilty by association with tyranny in the eyes of the tyrannized unless the protected ruler is indeed vile. But the shah was not vile, and he was not unpopular for most of his tenure. His repressive tendencies came fairly late, after he had lost several trusted and wise advisors and after he became ill with the cancer that eventually killed him. For most of his rule he tried to emulate his father, Reza Shah, as a modernizer — and with the White Revolution he made much progress in that regard. He dispossessed the clerical estates and tamed the power of the landed aristocracy. As serious students of modernization know (alas, that leaves out almost all write-from-the-hip journalists), land reform is absolutely essential to economic and eventually political modernization; by almost any standard the shah’s efforts in this regard were impressive. Iranian modernization as directed from the Peacock Throne probably went farther and was more sustainable than any that Mossadegh and his disputatious colleagues and successors could have achieved. Indeed, had Iran come under a form of even limited neo-imperial Soviet influence like that from which Egypt, Algeria, Iraq, Syria, and other countries have so much suffered, its “reforms” might have actually been retrogressive.

More than that, though the immaculate conceptionists tend not to know it, the shah granted the vote to women in 1964. It was this act that first galvanized clerical opposition to the regime and was the catalyst for the first occasion upon which Ruhollah Khomeini went out and got himself arrested. We know how the story turned sad in 1978, but the success of the shah’s reforms went so deep in Iranian society that the rule of the Islamic Republic will, in the end, not stick. Perhaps the best illustration of this is that the mullahs have not dared suggest that the vote be taken away from women, though this is precisely what their theology would mandate. The clerical regime’s reticence on this score defines a significant limit, a social red line, that leaves open a dynamic in which the empowerment of women may well drive Iranian society toward pluralism, the flowering of liberal constitutionalism, and eventually democracy.

Even that is not quite all. Immaculate conception theorists hold that once the shah was restored, his repressive misrule made the Ayatollah Khomeini inevitable. Not only is the shah’s repression distorted and exaggerated in their telling of it, but it was the bungling of the Carter administration that allowed the clerics to seize power. Illustrating the difference between an ignoramus and a fool, some of that administration’s cabinet members not merely believed — they actually said it publicly — that Ayatollah Khomeini was a “saint” who would soon retire from politics. Worse, the administration actively dissuaded the Iranian military, via the infamous Huyser mission among other modalities, from preventing the mullahs from taking power. Supporting the shah was good policy. Failure to adjust when the shah’s touch slipped was unfortunate but not fatal. The mismanagement of the endgame was disastrous, but it was also entirely avoidable.

This is not the place to rehearse the larger “friendly tyrants” debate; suffice it to say that since countries such as Egypt and Saudi Arabia were undemocratic long before the United States ever began to support their governments, the argument that the United States is somehow responsible for their being undemocratic is a little hard to follow. However they came to be undemocratic, U.S. support does implicate us in their misanthropies — true enough. But once again, it is a mistake to think that one and only one set of interests is at play at one time. The proper question to ask is: What have been the interests and principles — plural — at issue, and what have been the available alternative policy choices to deal with them, particularly when a given action may advance one interest at the cost of retarding the achievement of another? Was the United States ever in a position merely to wave its hand and bring democracy to Egypt or Saudi Arabia? Would it have been responsible to try to do so in light of the other strategic interests we held in common with these countries — the utility of their anti-Soviet postures, their role in preserving the stability of the Egyptian-Israeli peace treaty, their contribution to moderating the price of oil and hence aiding the health of the international economy, and others besides? Not only were these no small matters, neither were they bereft of moral implications. It is not morality but moral posturing to wear human rights concerns on one’s sleeve, indicating as it usually does one’s favoritism for intentions over consequences, while simultaneously presuming that concern for the structural elements of international peace and prosperity is the domain of the cold, gray overseers of corporate and national equities. This verges on ethical illiteracy.

In any event, the answer to both aforementioned questions is plainly “no,” and any group of responsible American decision makers, sitting in the seats and seeing the world as those decision makers must have sat and seen, would have reached the same conclusions. Could we have attained a better balance between our strategic interests and our democratic principles in relations with such countries? No doubt we could have and should have. It is true, too, that a certain condescension toward Middle Easterners and their social and political proclivities was in quiet evidence. (President Reagan broke through this condescension once, in spectacular if limited fashion, when during the 1985 Achille Lauro tragedy he contended against Egyptian pleas that Americans “understand” the Arab cultural definition of a “white lie” with the assertion that Egyptians were perfectly capable of understanding the American view as well.) The point, of course, is this: In the pinch during the Cold War, when major decisions had to be made, U.S. decision makers (even in the Carter administration) never allowed democratic reform and human rights concerns to trump all other interests. This was particularly so in countries where democracy was not ingrained in local political culture and where U.S. efforts were therefore unlikely to bear fruit. Of course this was the right approach to take, for the Cold War was the preeminent moral stake at play in the world. Of what value would an accumulation of moralist gesticulations have been against the survival of Soviet power?

But that was then, and this is now. Cold War habits have died hard in some places; in more than a few, unfortunately, they are breathing still. What Pakistan represented to the United States during the Cold War, when India was a Soviet ally and China was a tacit ally of the United States also allied with Pakistan, was one thing. What Pakistan represented after 1991 was something else, but the U.S. policy establishment was slow to mark and act upon the change. That establishment has been even slower to reckon the meaning of the end of the Cold War for U.S. interests in Korea and Northeast Asia generally. It made sense to risk war and to pay other costs in Korea between 1953 and 1991, when that peninsula was tied to a larger stake; if it has made sense since 1991, the logic has not been demonstrated, to say the least. Should we have adjusted policy toward Arab and other “friendly tyrants” after the end of the Cold War, when the balance of our interests changed? Absolutely. Since September 11, 2001, this conclusion has finally begun to sink in, as has a sense of regret that the U.S. government did not reach it sooner. Mistakes have indeed been made, but not the ones the immaculate conceptionists cite.

These examples of the immaculate conception theory of U.S. foreign policy tend to come from the political left, where the attack on realism’s supposed interest-based bias in favor of a values-based one has long succumbed to the “mass” production techniques of modern journalism. (Just who really acts in the service of values is not all that clear, as suggested above, but never mind.) The theory of the immaculate conception is not limited to the left, however, as the mother of all examples — the end of the Gulf War in late winter 1991 — demonstrates.

It has become axiomatic in many right-of-center circles that the decision not to march to Baghdad and bring down the Baath Party in February 1991 was a terrible mistake. Some in the George H.W. Bush administration believed that at the time, and President Bush himself did subsequently acknowledge some misjudgments — though not the decision to abjure an occupation of Baghdad. Rather, the former President Bush admitted that the exact terms of the ceasefire and the failure of the United States to assist the Kurdish and Shia uprisings were connected mistakes. He has it exactly right.

There were good and not-so-good reasons for the decision not to march to Baghdad. Judging from what was known then — not three months or two years or ten years afterwards — it is both possible and still important to ponder them. Indeed, given America’s current undertaking in Iraq, it is, or ought to be, irresistible.

A reason given at the time to stay away from Baghdad was that the United States had U.N. Security Council authorization only to liberate Kuwait, not to invade Iraq. This was true, technically, and the United States does need to be mindful of how its actions affect the systemic contours of international law, an institution that benefits us more than it benefits most others. That said, this was not a strong argument. Had the administration viewed other considerations differently, the U.N. barrier alone would not have stopped it.

Among those other considerations was the concern that the United States could not count on a continuation of very low casualties among coalition forces once the battle was moved to Iraqi soil. That concern was predicated on the fear that the battle would not remain conventionally fought if the regime in Baghdad concluded that its days were literally numbered. The worry was that higher casualties would undermine the coalition and reduce public support for the war. We do not know whether the Hammurabi or the Medina divisions of the Iraqi Republican Guard would have fought well on their own soil in 1991. Nor is it certain that the Iraqis would have used chemical or biological weapons against U.S. forces. But the possibilities were not so far-fetched that responsible American decision makers would have failed to take them seriously.

A third argument, now all but forgotten, was that Iraq was a Soviet client, and to occupy Iraq would be to humiliate the Soviet Union, with whom we hoped to create a new world order. This, of course, was an argument that some people are glad is now forgotten, since it reveals the fact that some key American decision makers suffered a massive failure of imagination. They could not conceive of a world without the Soviet Union, which lasted only another 10 months. At the time, however, given the premise of a surviving USSR, this too was not an unreasonable consideration.

Yet another argument was that bringing down the Baath would splinter Iraq and, via its Shia population, provide a major advantage to Iran at Saudi expense. People disagreed then, and still disagree, about the plausibility of this scenario. No serious regional expert has ever credited it; all agree that Arab and Persian blood is thicker than sectarian Islamic water and that Iraqi Shia are not unionist-minded with Iran. The administration at the time took this danger seriously, however, not least because its policy was ultimately Saudi-centered, not Kuwaiti-centered — and the Saudis were adamant about this danger.

But why? Do the Saudis understand their own neighborhood less well than Western experts? Of course not. The Saudis were not worried by Iranian irredentism per se, but as good Muslims they follow Abu Bakr’s admonition to know one’s genealogies, not to be as the ignorant peasants who say they come from this or that place. It was not, and is not, borders that worry the Saudis, but sectarian loyalties. They very much prefer a Sunni-dominated authoritarian Iraq to a looser, more populist Shia Iraq. They know that Shia are the majority in Iraq, and they have watched in recent years as a similarly despised and downtrodden Shia Arab minority in Lebanon has risen to significant political status. They do not want to see such a thing repeated on their northern border, not just because the religion of the Wahhabis abhors Shia saint cults and theological apostasy, but because probably the majority population in Al-Hasa province, where most of Saudi Arabia’s oil lies, is Shia. They fear sectarian contagion from a Shia-dominated Iraq, not a split, territorially fractured Iraq. But they fed American leaders a line in 1991, and those leaders appear to have swallowed it whole.

Finally, some argued that an American-led occupation of an Arab capital would exhume all of the gruesome historical demons of the previous three centuries of Muslim-Western conflict from their netherworld haunts, making the United States the focus of enormous resentment and hatred throughout the Arab and Muslim worlds. We might think of ourselves as liberators, and the Iraqis might have welcomed us the morning after. But then how to set up a new regime and take our leave without undermining that regime’s nationalist credentials? (Hint: This is difficult.) How to create a government decentralized enough to accommodate Iraq’s ethnic and sectarian heterogeneity but not so decentralized as to tempt Iraq’s neighbors to turn the country into a souk for espionage, smuggling, and general mayhem? (Hint: This is even more difficult.) How to build a defense force after the fact that is capable of defending Iraq from large neighbors like Iran but not at the same time “too” large juxtaposed against smaller neighbors like Kuwait? (Hint: This is impossible.) So getting to Baghdad was never the problem; getting out of it without creating more trouble than we would have resolved was the hitch.

Alas, as should be all too clear, this is not a mere historical footnote. It is a problem still resonating a dozen years later, and none of the problems contemplated in 1991 have gotten easier. This, ultimately, is the best reason for not having marched to Baghdad, and that reason, conjoined to the belief of virtually every expert that Saddam would not survive six months after such a trouncing humiliation, explains why we did not then go there. It was a reasonable prediction, but it was wrong. These things happen.

It would not have been wrong, however, had the two mistakes to which President George H.W. Bush has pointed not been made. Had U.S. civilian authorities not ceded their decision-making power to General Norman Schwarzkopf, who let the Iraqis fly those helicopters, and had the United States simultaneously supported the Kurdish and Shia rebellions, Saddam would not have survived in power. So yes, mistakes were made; but, again, not the ones most often raised by the theorists of the immaculate conception persuasion.

Now, why were those mistakes made? One reason was disagreement near the top, and President Bush père tried to have it both ways. He wanted Saddam gone, but he did not want to pay a price in American blood and entanglement if he could help it. He wanted Saddam gone, but not at the cost of provoking a crisis with Saudi Arabia or aiding the mullahs of Tehran. He split the differences among his advisors and hoped for the best, and this was by no means unreasonable. It happened not to have been successful, but in whose life does every wager pay off?

Had the president ordered the occupation of Baghdad in 1991, we would not have had to put up with Saddam for the past dozen years, and that would have been more than a small mercy. But for all anyone knows, American troops would have been there all along, for a dozen years, and who knows the larger consequences of that in the Arab and Muslim worlds — and of how those consequences might have redounded back on us? Who knows for certain that other, even more dangerous consequences than our having to live with Saddam these past 12 years would not have been set in motion? No one can possibly know this, which is why the popular condemnation of the Gulf War’s endgame often sounds far too cocksure against the available, but inevitably incomplete, evidence. It is perfectly true, as Paul Wolfowitz wrote back in 1994, that “by and large, wars are not constructive acts: they are better judged by what they prevent than by what they accomplish. But what a war has prevented is impossible ever to know with certainty, and for many observers is never a subject of serious reflection.” It is also true, however, that what wars prevent may sometimes end up benign while what they accomplish can evoke justifiable regret. It is, unfortunately, not an unreasonable fear that Gulf War II will illustrate that very point to our considerable disappointment.

American presidents, who have to make the truly big decisions of U.S. foreign policy, must come to a judgment with incomplete information, often under stress and merciless time constraints, and frequently with their closest advisors painting one another in shades of disagreement. The choices are never between obviously good and obviously bad, but between greater and lesser sets of risks, greater and lesser prospects of danger. Banal as it sounds, we do well to remind ourselves from time to time that things really are not so simple, even when one’s basic principles are clear and correct. When President George W. Bush strove, from September 12, 2001 onward, to make the moral and strategic stakes of the war on terrorism clear, he was immediately enshrouded by an inescapable fog of irrepressible fact: namely, that our two most critical tactical allies in the war on terrorism, Pakistan and Saudi Arabia, were the two governments whose policies had led most directly to 9-11. If that was not enough ambiguity with which to start the war on terrorism, the various sideswipes of the Israeli-Palestinian conflict soon provided more.

Does this mean that George Bush, with his Bush Doctrine, is now evincing a form of the immaculate conception theory of U.S. foreign policy? Is the administration simplifying to our peril that which cannot, or at any rate ought not, be simplified? Not necessarily. An administration has a right to its own rhetoric, and with it a right to put some distance between that rhetoric and its actual conduct. Just as covert action, as Henry Kissinger has said, should not be confused with charity work, the art of public diplomacy is not and should not be tantamount to telling the truth, the whole truth, and nothing but the truth.

But the president does invite harm if he thinks himself free from having to make tradeoffs; if he thinks, for example, that by some sort of ex cathedra definition there can be no long-term political downside to a protracted U.S. occupation of Iraq. We do take risks that imperfect policy in the here and now will give rise to unknowable dangers in the there and then. As bad as Saddam and the Baath have been, they have not been Islamist in orientation. If we are not prepared to sit in Baghdad for half a century, who can guarantee that such a regime will not in due course follow the war? And if we do sit in Baghdad for a long time, who can guarantee that our doing so will not engender such tendencies in states nearby? And the president courts trouble if he somehow loses sight of the need to wend his way among advisors who do not always agree and underlings who do not always behave. Splitting the difference among differently minded advisors works, or at least doesn’t obviously fail, when incremental policy fits the task. When boldness is required, such splitting is liable to give rise to half-measures (and mis-measures) that only make things worse.

The president will also find no escape, even long after he leaves the White House, from the accusations of the immaculate conception school, whose students will not cease to pronounce the judgment of the sophomoric from now until thistles lose their barbs. One can only imagine what simplicities they will fabricate from the detritus of this war. Of this irritation, what can one say? These things happen.

Adam Garfinkle is a Senior Fellow at the Foreign Policy Research Institute.

Copyright © 2003 Policy Review

Serendipity

My Madison (WI) chum — Fightin' Tom Terrific — sent me a link for the Tom Tomorrow Blog/Web Site that led me to the Working For Change Web site. What did I encounter? Molly Ivins' follow-up to her analysis of the mess W and his Merry Men have created in Iraq. Molly had promised a sequel in that earlier column. Since the Amarillo fishwrap only runs Molly once each week (and the rightwing nuts howl at that), I am glad that I found her latest comments. Molly even cites a writer for the National Review as a source of information. If this be (fair & balanced) polemicism, make the most of it.



Just fix it
By Molly Ivins

09.04.03 - AUSTIN, Texas -- It is insufficient to stand around saying, "I told you Iraq would be a disaster." Believe me, saying, "I told you so" is a satisfaction so sour it will gag you when people, including Americans, are dying every day.

I think our greatest strength is still pragmatism. OK, this isn't working, now what? In an effort to be constructive, even in the face of a developing catastrophe, I have been combing the public prints in an effort to find something positive to suggest.

There is a general consensus on both the left and right that we need to get more people over there, take control, and fix the lights and water, for starters. The more thoughtful advocates in the Do Something school, including Tom Friedman of The New York Times and David Ignatius of The Washington Post, favor a broader and more active coalition of international support, and the legitimacy that would provide. Kofi Annan, a classy guy, had the grace to say after the bombing of U.N. headquarters in Baghdad, "The pacification and stabilization of Iraq is so important that all of us who have the capacity to help should help."

Secretary of State Colin Powell is now asking France, Germany and Britain to back a resolution in the United Nations that would bring in more international help. Some of the usual black-helicopter nuts insist, "But we must still be in control." Since the whole problem is that we're not in control now, that seems like a silly point. Whatever, in terms of the command structure -- let's just get some U.N. troops over there. If it takes more American troops, I suggest we send more American troops, because letting Iraq degenerate into chaos isn't good for the Iraqis or us.

There seems to be general agreement on a second step, as well -- handing off power to the Iraqis themselves. I wince to report this is already being called "Iraqification." Trouble is, we seem to be setting about it backasswards, by creating a national Iraqi council of our hand-selected choices and now giving some authority to these cabinet-level types. Wouldn't it make more sense to start at the local level? Why can't the Iraqis hold mayoral elections and go from there? (I know, they tried to do it in Najaf in June, but Paul Bremer stepped in and cancelled the election -- another mistake.)

A mistake we can avoid is Ahmad Chalabi. Chalabi, head of the exile group the Iraqi National Congress and also a convicted swindler, was the neo-cons' darling before the war. He is the right-wing's oddest foreign enthusiasm since the time they took up that dingbat killer Jonas Savimbi in Angola. Chalabi is widely reported to be the source of much of the massively bad intelligence the administration relied on concerning weapons of mass destruction and other subjects. Apparently, no one in the administration had ever come across the common wisdom about not trusting exile groups. One would think that Chalabi's untrustworthiness would be clear to all by now, but there are still a few true believers.

Some in the "I'm trying to be constructive" camp are advocating the reconstitution of the Iraqi Army on the grounds that much of it did not fight for Saddam Hussein anyway. That seems to me a more problematic enterprise. The army was surely the most Baathist of all Hussein's institutions. Perhaps if one started with the privates and didn't go very far up, one could avoid the real Baathist thugs.

I found a useful idea buried in a National Review article by John O'Sullivan, after wading through many paragraphs of silly, tendentious left-bashing. Boy, does he not get why many of us opposed this war. Anyway, he presented an idea he said comes from Pamela Hess of UPI: a short-term public works program, paying young men $5 a day to rebuild infrastructure. "Given that the devil makes work for idle hands, that would be a security program as well as an economic program." Sounds smart to me. We're paying Halliburton $1.7 billion to go in and fix things, but private companies obviously don't want to send their people into an active war zone. Why not pay the Iraqis, instead?

With both liberals and conservatives now on the "For Lord's sake, fix it" side, the biggest impediment to actually doing something is the Pentagon's "Hey, no problem, everything's going according to plan" attitude. Donald Rumsfeld is starting to sound like Alfred E. ("What, me worry?") Neuman. The inability to admit error is a salient characteristic of this administration, but I'm not interested in apologies or mea culpas -- just get over there and fix it.

If worse comes to worst, we can always follow Sen. George Aiken's solution for Vietnam, "Declare victory and go home."

Copyright © 2003 Creators Syndicate




Why Do I Love Ben Sargent's Cartoons?

The only thing better would have been Governor Goodhair portrayed as a puppet being manipulated by Gruppenführer Tom DeLay. However, the cybergamer Elephant is OK. If this be (fair & balanced) sedition, so be it!



Laura Hillenbrand Tells All

I saw the movie, then read the book, and Laura Hillenbrand sold out. Gary Ross ("Seabiscuit" [the movie]) did NOT do justice to her book. I know that Universal paid her a LOT of money for the film rights, but the film did NOT come close to the book as an historical document. Seabiscuit is a great book. My chum — Tom Terrific in Madison, WI — touted it to me a year ago, but sloth that I am, I just got to it in the late summer. Seabiscuit — in life and unlike the film portrayal — was a character. He slept a lot. He ate a lot. However, he ran like no other horse before or since. What a competitor! At one race track, the Biscuit's stall was in sight of the track. Seabiscuit had to be moved to another stall out of sight of the track because his trainer — Tom Smith — feared the horse would injure himself trying to get out of the stall to run with the other horses. Seabiscuit was arrogant. Seabiscuit was good. It took special jockeys to ride Seabiscuit. John (Red) Pollard and George Woolf were special jockeys. Charles Howard (the owner) was a special character, too. Read Seabiscuit if you haven't. It's never too late to read a great book. If this be (fair & balanced) huckstering, so be it.


[x NYTimes]
September 3, 2003
10 QUESTIONS FOR . . .
Laura Hillenbrand

The author of "Seabiscuit" answered readers' questions about the book turned movie, her obsession with horses and living with chronic fatigue syndrome.

Q. 1. Your book is so wonderfully written; it's completely absorbing. It enters my heart when you speak for the feelings of the horse, as if Seabiscuit is describing what he's going through. My question is why this subject? How long did the research take?

A. Thanks for the compliment. I stumbled on the subject accidentally. The basics of Seabiscuit's story have always been moderately well known among racing fans, but at the time that I came upon it, no one had ever explored the lives of the men who handled him. In 1996, I was going through some old racing material when I came across a few bits of information on Seabiscuit's jockey, owner and trainer. It struck me as fascinating that an automobile magnate who had devoted his life to making horses obsolete would find his greatest success managing a racehorse with a frontier horseman. I was intrigued enough to look a little deeper. I quickly realized that I had found an extraordinarily dramatic human story to go with the equine one. I spent the next four years researching it.

Q. 2. I am a historian. How did you move through to make this time period "live"? In other words, what helped you in bringing this particularly harsh period of time to life on paper?

A. I think the secret to bringing immediacy to any nonfiction story is to ferret out every detail that is there to be found, so that the reader feels like an eyewitness. To do this, I consulted a very broad range of sources, from record books to living witnesses, and everything in between. I studied every film and photograph that I could find, and acquired complete newspapers and magazines from the period and read them cover to cover so I could put myself in the mindset of the men and women of the era. I researched what things cost, what books and movies were popular, what the weather was on a particular day, anything that might help me stand in the shoes of an average American of the Depression era. I was very fortunate in that Seabiscuit was covered very heavily in the press and followed by millions of people, so there was a lot to be found.

Q. 3. Do you credit any works of "artful nonfiction" that had an important influence on the style of your telling the Seabiscuit story?

A. My goal as an historian is to make nonfiction read as smoothly as fiction while adhering very strictly to fact. I read a lot of nonfiction, and have certainly been influenced by such superb historians as Bruce Catton and David McCullough, but the writers who have had the greatest impact on me have been novelists. Michael Shaara's masterpiece "The Killer Angels," an historical novel about Gettysburg, has had a tremendous influence on my writing. Tolstoy has also been a wonderful teacher, particularly in "War and Peace" and "Anna Karenina." Other writers I read over and over again, and try to emulate, include Austen, Wharton, Fitzgerald and Hemingway.

Q. 4. I was born in 1938, so the book's description of that era made me feel as if I was learning something of the world in which I grew up but of which I remember virtually nothing. I was so enthralled by your book that I am hesitant to risk spoiling the images in my mind by going to see the film. Were you satisfied that the filmmakers did justice to your masterful work? What part of the movie differed most from your book? Which part of the book was left out that you would have liked to include?

A. In writing my book, I had the luxury of devoting as much space as I wished to the story. But movies must adhere to extremely strict, brief timeframes. Consequently, most nonfiction stories that become feature films end up severely truncated or so radically fictionalized that they are barely recognizable. I knew that Seabiscuit's story, with its perfect cinematic structure, dramatic and fast-paced scenes, and colorful characters and setting, was the ideal material for a film, but it was hard to envision how a screenwriter could adapt such a complicated tale without stripping it bare or losing the essence of the subjects.

When selling the film rights, I felt that my first responsibility was to ensure that my subjects' story wound up in the hands of a filmmaker who would be as true as possible to who they were. I knew that there was no way to avoid some fictionalization and streamlining, and I had no objection to that, so long as the movie was consistent with the true nature of its subjects and their era. After speaking to several directors and producers, I chose writer/director/producer Gary Ross.

I never had a moment's regret about that decision. Filmmakers are famous for weeding authors out of the creative process, but my experience with Gary was entirely different. Each time he needed to alter the story, create a composite character, or pursue a theme, he called me to see what I thought of it. It was immediately obvious that he was passionate about being faithful to the facts and the individuals involved. His passion showed in the final product, which I found enthralling. What struck me was not what was missing, but how deftly Gary managed to weave so much of the story into so short a time without it feeling compressed or rushed. Gary did an exceptional job, and I am immensely grateful to him.

Q. 5. Concerning the fact that Universal bought the rights way before you started writing, did you manage to concentrate on the true facts and never think of the adaptation-to-be? In other words, did the deal influence the way you pictured the story in your mind?

A. I sold the movie rights two days after getting my book deal, without having written a single word of the book. Before I started getting calls from people proposing to make a movie of the story, I had never even considered what it would look like on film. In the 24 hours that I spent interviewing with those who wanted to make it into a movie, I gave a lot of thought to how I would like to see the story told, but once I chose Gary Ross, I mentally consigned the project to him and forgot about it. I was completely absorbed in the hunt for information and the process of writing, and trusted Gary to do a much better job than I ever could in adapting it to the screen.

Q. 6. How did you learn so much about the inside game of horse racing; e.g., the way jockeys acted, the manner of thoroughbreds, the brutality of the sport? Were you raised in horse country?

A. What I know about horses I have learned in a lifetime spent in their company. I grew up with a motley crew of horses on our family farm; my sister Susan and I would make bridles out of twine and ride the horses bareback around the cow pastures. As a small child, I was taken to Charles Town Racetrack in West Virginia, where I saw my first racehorse, a gray named Blue Barry. I was smitten. Each weekend, Susan and I would ride the Greyhound bus to the local Maryland racetracks, talking horses with the retirees who rode out with us. I never placed a bet. I'd just stand by the rail, watching. As a teenager, I read obsessively about racing, papering my bedroom walls with Andy Beyer columns from the Washington Post and collecting every book I could find on the subject. Prior to writing Seabiscuit, I wrote for Equus magazine, penning stories on equine medicine and behavior, and I learned a great deal there. I had enough of a background to feel comfortable writing about the sport, and learned a great deal more while researching the book.

Q. 7. In your book, how did you develop accurate descriptions for so many minute events? Did you ever have to guesstimate what probably happened, or were there always historical witnesses? As one example, there is a description of a conversation between a hospitalized Red Pollard and a hospital nurse that evidently occurred the day that Seabiscuit ran and lost in San Antonio, and I read it with some real curiosity as to who served as the source. Who was your source for this conversation? If the nurse was available to you, and actually described the conversation, then I would ask how you feel about the possibility that people may not remember the exact words that transpired 40 or 50 years before (I can barely remember what I said yesterday)? What is required for it to be an accurate history?

A. That's a great question. All the facts and quotations in my book, including the details, are drawn from solid sources, often multiple ones, and I never added any fictional or "guessed" information of any kind. I simply don't think you can invent anything and still call your book nonfiction. I was very fortunate in that there was so much information out there, enabling me to include a lot of detail and a surprising number of quotations. Reporters were present for many of the major events in this story, and they frequently recorded entire conversations or scenes in detail. In the example that you cite, the conversation was recorded as it happened by a prominent reporter who was in the hospital room, and he published it in his newspaper. In many cases, quotations and facts were recorded by multiple sources, so I could check one against the other. I kept a master list of journalists who regularly covered Seabiscuit and kept track of who was reliable and who was not.

In terms of living sources, you're right -- memory is a fallible thing, and I proceeded with caution in using information from such sources. When a living witness gave me information, I was often able to cross-check it against 1930s sources. I was impressed with how accurately these people remembered events from so long ago. The vast majority of the quotations I used came from written sources from that era -- newspapers, magazines, letters, telegrams -- or audio, not living witnesses. I used very few quotes for which a living person's memory was the only source, and then only when the quote was very brief, such as the "So long, Charley" that George Woolf yelled back to Charley Kurtsinger in the War Admiral match race.

Q. 8. I read your New Yorker article about your chronic fatigue illness. It was beautifully written and quite moving. Do you plan to do a book on the subject, or will you stick with horses? Can you tell us what your next subject might be? How does the success of "Seabiscuit" affect the way you approach writing your next book?

A. The New Yorker piece was the hardest thing I have ever written, both because I am struggling with vertigo, which makes reading and writing punishing, and because it’s very difficult for me to find words to express the devastation that my illness brought to my life. It took two years to write the article. Eventually I might want to write a book about my experience with this disease, but that is a daunting prospect. My career has given me a way to find a separate identity outside a disease that governs every detail of my life. I'm not sure I'm ready to focus my career on my illness, because I would lose that escape. I doubt that my next book will be related to horses either. One of the things that I love about my career is the ability it gives you to roam through many different subjects, and I'm ready to learn about something new. I will go where the stories are.

I don't think that the success of "Seabiscuit" will have an impact on me when I write my next book. People keep telling me that there will be a lot of pressure to have a similar success with my next book, but I don't feel that pressure. I didn't write "Seabiscuit" with the goal of having a big bestseller. I wrote it because I loved the story and wanted to live in it for a while. Had the book been a commercial failure, I'd still be happy, because writing it was such a joyful experience. As I approach my next book, my attitude is exactly the same. I am so fortunate to have a job that takes my mind to fascinating places. I don't need anything more than that. I will search for a story that engages me as Seabiscuit did, and I'll do my best to tell it well. The rest is out of my hands and I'm not going to waste time worrying about it.

Q. 9. I am astounded that you produced a book like this while so sick. How were you able to conduct interviews and do the necessary research I imagine it entailed?

A. C.F.S. causes a host of symptoms, from fevers, chills and night sweats to cognitive problems and impaired immunity, but the symptoms that are most debilitating for me are exhaustion and vertigo. I had to find a way to work in spite of those symptoms.

To deal with the exhaustion, which renders me bedbound at times, I did everything I could to limit my energy expenditure to tasks related to my book. For the years in which I was writing, I did virtually nothing else. I put a refrigerator in my office, right next to my desk, so I could eat while I worked instead of walking downstairs. On some days, I'd lie on the floor, spread all my source materials around me and work there. Sometimes I'd lug my laptop to bed and write while lying down. There was no way for me to travel to distant libraries, so I used Interlibrary Loan services to arrange for books and newspapers to be sent to my local library from the Library of Congress and other libraries. I hired a former jockey to go to Kentucky's Keeneland Racecourse, which has a comprehensive racing library, and photocopy like crazy for two days.

My other major obstacle was vertigo, which causes my surroundings to look and feel like they are spinning or pitching up and down. The symptoms never go away, but reading and writing greatly exacerbate them, as does looking down. My boyfriend jerry-rigged a device to hold source materials upright, so I could avoid looking down. I put my laptop on a stack of books, so it was at eye level. When the vertigo was very bad, I'd lie in bed and write on a pad with my eyes closed. It was punishing work. At the end of every day I was quite nauseated from the vertigo and exhaustion, and in the final weeks of writing I was so overworked that my hands shook, but somehow I got the book done.

Q. 10. We've heard about your struggle to write at times, under the overwhelming sway of chronic fatigue, whose symptoms of vertigo, profound exhaustion and pain demand a purposeful, disciplined schedule of rest. What have you won through this experience? What part of it, if any, has been too big a price to pay? How is the chronic fatigue now that the book is done and the movie's in theaters? What words of encouragement or advice would you give to others suffering from C.F.S. about setting goals and achieving dreams such as yours?

A. I did not take good care of myself as I wrote this book, and I am continuing to pay for it. The day after I turned in my manuscript, my health collapsed. My exhaustion became much more severe, and my vertigo returned in force, making it impossible to read more than a few lines a day. Three years later, it has relented slightly, but I am still severely limited in my ability to read and write. I write an occasional magazine article, but it takes weeks and leaves me miserably dizzy. I am still unable to read a book; I do all my "reading" via audiotape. Strength-wise, I am improving, but I am not as strong as I was before I began this project. Thanks to the success of the book and the movie, my schedule has been extremely exhausting, and I am hopeful that once my life calms down a bit, my health will rally.

It has not been a good three years, health-wise, but I'm not sure I would say that it was too big a price to pay. The book was a blissful escape for me, giving me the chance to walk around in the lives of three fascinating, vigorous men who lived a life of motion -- a life opposite to my own. And though I have sacrificed my health for this project, in a way I feel that the book has given me a way to triumph over my disease, because I was able to achieve something in spite of it. Finally, it has given me a platform from which to be an advocate for the 800,000 people in this country who suffer from chronic fatigue syndrome, a greatly misunderstood and very serious disease.

Copyright © 2003 The New York Times Company

W & Yasir: Soul Brothers?

W and Yasir have a LOT in common. Both have done more with less than any leader alive today. In fact, we ought to loan Karl Rove and Karen Hughes to Yasir Arafat for one year (no cash involved). There would be peace in the Middle East because the Palestinians wouldn't understand a damn thing that Yasir was saying. If this be (fair & balanced) treason, make the most of it.


[x CHE]
Yasir Arafat: Mystery Inside an Enigma
By BARRY RUBIN

Writing a biography of anyone is a challenging task, but narrating and analyzing Yasir Arafat's life is a particularly daunting one. Arafat has held the international spotlight for longer than almost any other politician on the planet. He has been a political activist for 55 years, head of his own organization for 44, leader of his people for 36, and head of a virtual Palestinian government for 10. He has achieved little material progress for his people, but, even in the twilight of his career, he has neither given up nor been pushed aside.

Despite that long and dramatic history, Arafat remains largely an unknown person. Everything about him is controversial, starting with the location of his birthplace. The most basic facts about his background, thoughts, and activities are disputed. Even the emotions he evokes are passionate and opposing. To make matters still more complex, he has always used highly secretive methods as the leader of what was -- and in many ways remains -- an underground organization.

Indeed, Arafat's inability to transform himself from clandestine revolutionary, his preferred persona, into statesman on the world stage (or even pragmatic politician) has been a key factor in his failure: He has succeeded at creating the world's longest-running revolutionary movement, the Palestine Liberation Organization, but has been unable to bring it to a successful conclusion.

Consider Arafat in a bunker in 2002 at his headquarters in the town of Ramallah, his provisional capital, as the Israeli army advanced. Once again, he was surrounded by the enemy, the sound of gunfire echoing in his ears, the world riveted by his every word. What could be more proper, fulfilling, glorious? No one could call him a sellout. And so, once more, he achieved that state of revolutionary nirvana. What others would have thought to be his most desperate moment seemed to satisfy him far more than negotiating peace or administering his near-state in the West Bank and Gaza Strip.

"The more destruction I see, the stronger I get," newspapers quoted him saying.

Length and continuity of observation are important here. I have been studying Arafat for more than 30 years, and such continuity makes a difference in making connections among themes and events, even in hearing the echo of specific statements and how they hark back to earlier situations. Over time, too, one can glimpse the ability of a political figure to change -- or to be paralyzed by the inability to do so.

Without that kind of long-term perspective on Arafat, it is much harder to understand his career. Many people have thought everything about him was obvious -- even if they could not agree on facts and interpretations. There have been journalistic biographies, some hagiographic, some apologetic. Those works are at least a decade old, written before key events and without access to material in British and American archives subsequently opened; more important, they lack the perspective that could be provided only near the end of Arafat's career.

Still, my own research project faced major barriers. Those who have not dealt with the Middle East may be unable to conceive of how difficult it is to establish even the simplest facts. A Palestinian journalist has written that she was asked, in passing, how many people lived in her hometown, Ramallah. She spent weeks on a quest for the answer, talking to a wide variety of officials, each of whom gave wildly differing figures. When my co-author, Judith Colp Rubin, and I were trying to put together a list of the members of leading PLO bodies, officials in the West Bank were unsure and had to call the PLO headquarters in Tunis to find the answers.

Part of the problem is the unavailability of archival sources. Government archives in the Middle East are simply not open, and there is no Freedom of Information Act in those countries. Still, Arafat's is one of the few contemporary careers that reaches back to an era for which Western archives containing materials about him are open. Those include U.S. and British Embassy reports, which are open through 1973. Such materials contain quite a bit of information about Arafat's early career, although they sometimes also show how little was known about him at the time. For example, the archives show, in detail, both London's and Washington's secret negotiations with Arafat in the 1970s, offering to help his movement if it did not attack civilians. The British government was even ready to stand aside and watch Arafat overthrow its old client, King Hussein of Jordan, in 1970.

Arafat's early rise was rapid. Growing up mostly in Cairo, he became a student activist in the 1940s. But, in 1957, unemployed and no longer welcome in a country where he had backed the losing, Islamist political side, he emigrated to Kuwait. By 1959, he had established his own group; in 1965, he began guerrilla attacks on Israel, with Syrian backing. Arafat aspired to be the Che Guevara of the Middle East. Two years later, having gained the patronage of Egypt, he became the PLO's leader. He has held that position ever since.

Arafat had learned how to court Arab rulers and to run his movement quite effectively. It took him a bit longer, but he became adept at international public relations as well. As he became a more significant figure, he increasingly played to the mass media, and more of his sayings and doings became available. That coverage provides a major source about his life. I don't mean just The New York Times or The Washington Post, since more obscure publications, or those in the region, have much of the best material. One of the most interesting items we found on the 1970s, for example, was a March 1973 speech by the Sudanese president, Jaafar Numeri, published in a Kenyan newspaper, detailing Arafat's involvement in the assassination of the U.S. ambassador to his country, and in subverting a regime that had given the Palestinian leader much help.

What has struck us perhaps most is how rarely Arabic primary sources like newspapers have been used in Western attempts to write about Arafat. It's not just a question of language, because there are literally tons of available textual translations from the U.S. Foreign Broadcast Information Service, the British Survey of World Broadcasts, the Arabic news media itself, and PLO sources. Comparing Arafat's statements to Arab journalists and his discussions with Western writers is another way of teasing out the enigma. The contrast reveals a gap that is important in separating Arafat's real thinking from what is, to cite a phrase he often uses himself, "blah, blah, blah." For example, frequently in the last few years, Arafat would make a statement in English condemning violence and calling for a cease-fire and then, a few hours later, give a speech in Arabic extolling suicide bombers. (At one point, in a secret meeting, he urged Palestinian groups to stop their attacks temporarily, because U.S. Secretary of State Colin L. Powell was visiting him.)

The same point applies to comparing the Western image of Arafat with that held by Arabs and Palestinians. That is only of value, though, if one can get behind the scenes and hear what people say in private. Thus much of our book is based on a large number of interviews -- many of them with veteran Arafat watchers and people who have worked closely with him, many off the record at the request of those involved. So often, at the end of our discussions, people would tell us that they still found Arafat a mystery. They looked forward to reading our finished book, hoping to understand.

In 1990, I set off to interview a top Palestinian leader in Tunis. As the taxi driver screeched to a stop at his home, a half-dozen guards pointed their Kalashnikovs at us. At first, the interview didn't seem to be yielding anything that might have been worth the risk. The official and his associates were simply giving out the current propaganda line, including the claim that Israel sought to conquer all of the Arab world from the Nile to the Euphrates. What could break through the verbiage? Finally I said: "Look, this is all nonsense. We both know it. So why don't we have a serious discussion?" Talk moved on to real issues, and a friendship with one of Arafat's key colleagues began that lasts to this day.

The promise to keep interviews on background, not for attribution, was key: They would have been worthless otherwise. Officials of Arab states, and most Palestinians, at least until the last few months, would not publicly say anything negative about Arafat. Privately, however, they would go on at great length about their criticisms and offer candid accounts of events. A Lebanese leader ended a long talk about why Israel should make concessions to Arafat by saying, "Not that I'd ever trust him myself."

An Egyptian leader complained that he felt Arafat had misled him about what went on at the Camp David summit. And a Jordanian official recalled jokingly how, when suffering from a bad cold, he felt better after greeting and kissing Arafat -- in the hope of infecting him.

But background interviews raise another set of issues. There are always temptations to use material -- often the most sensational -- that one cannot be sure is accurate. To resist, it is necessary to corroborate all important pieces of information from several sources, a painstaking and slow process. Sometimes, in the end, that meant for us not using material that journalists might have seized upon as the most newsworthy items.

Once the second intifada began in 2000, we faced more difficulties. Trips that had once been made routinely became a matter for serious consideration in a time of violence. Not long after one of us had coffee with a longtime Fatah activist in his home, the man was shot in an internal feud. Hiring local researchers to do interviews was also difficult, since Palestinian journalists or students were often reluctant to ask the probing questions we had prepared, lest they face retaliation. We ended up using such surrogates on only one occasion.

As a result, some of our most useful interviews ended up being conducted in various Middle Eastern countries, Europe, or the United States. Once, during a rather unfruitful interview, one of us spotted a photograph on an office shelf. "Who was that other man standing next to Arafat?" A secret emissary to the PLO head. The man's phone number was quickly procured, and he was most forthcoming, on background of course.

Interviews were especially critical for our understanding of the important Camp David summit meetings of 2000. Talking to members of the Palestinian, Israeli, and American delegations, we found that, contrary to what readers might expect, there was an overwhelming unanimity about what had happened at the meeting, an event that could be called the ultimate test of Arafat's abilities and intentions. Israel put forward an independent Palestinian state, with its capital in East Jerusalem, and more than $20-billion in refugee compensation as its opening offer. Arafat rejected any negotiations. Later that year, President Clinton proposed another, even better deal, which Arafat also refused. Instead, he placed his faith in a new war that he claimed would bring unilateral Israeli concessions and international intervention in his favor. The Palestinians suffered high casualties, massive infrastructure damage, and military defeat.

Thus, some of the most interesting questions we sought to answer ended up revolving around the reasons for Arafat's failure to achieve a Palestinian state, or more victories along the way, and the paradox of his long survival despite that fact; his inability to break with self-defeating behavior patterns; and his sustained credibility in the face of so much evidence of those failures.

The answers lie partly in Arafat's great abilities as a survivor. Defeated in Jordan in 1970, he fled to Lebanon; beaten in Lebanon, he moved on to Tunis; at a dead end in Tunis, after his support for Saddam Hussein in 1991 led Saudi Arabia and Kuwait to cut off his funds, he leaped into a deal with Israel that allowed him to revive his fortunes.

Of equal significance, however, has been Arafat's marked inability to grow as a strategist or tactician over the years. By refusing to acknowledge defeats, he has failed to learn the lessons from them. Indeed, his career shows four nearly identical cycles, each ending in failure: his time in Jordan (1967-71), Lebanon (1971-82), Tunis (1982-94), and the West Bank/Gaza Strip (1994-present). From each of those headquarters, he organized terror attacks on Israelis that brought neither Israel's surrender nor a liberated Palestinian state. At the same time, he disillusioned Western forces trying to help reach a compromise solution, while antagonizing his Arab allies and hosts. Sponsoring violence led him into military defeats -- at the hands of Jordan's army in 1970, and of the Israeli and Syrian armies in 1981 and 1982, for example. Equally, Arafat's refusal to rein in radical Palestinian groups or to keep his commitments discredited him. Each time, his hosts demanded his departure.

Arafat's main strategy for achieving victory, one that he has openly acknowledged and for which he is rightly seen as one of history's great revolutionary innovators, has been terror. Believing that Israel was not a real state that could stand the test of attrition -- despite its strong army -- from the mid-1960s he began advocating direct attacks on its civilian population. In 1968, giving his first interview ever to the Western news media, he explained his rationale: "We are not trying to destroy the Israeli army, of course. But Israel is not just an army. It is a society that can only survive and prosper on peace and security. We aim to disrupt that society. Insecurity will make a mess of their agriculture and commerce. It will halt immigration and encourage emigration. We will even disrupt their tourist industry." Those words accurately describe his strategy today.

Thirty-five years later, despite all evidence to the contrary, he continues to portray terror as a brilliant strategy that will force Israel to surrender to his demands. Faced with what others -- including such leaders within his own movement as Abu Mazin, Abu Alaa, and Muhammad Dahlan -- thought tempting peace offers in 2000, for instance, he chose to launch a new war on Israel.

Within his organization, too, Arafat has long adopted an anti-institutional strategy that has had its advantages but has been equally destructive: a sort of loosely directed anarchy. In contrast to the leaders of many successful nationalist movements, he has let each group and faction in his coalition maintain substantial independence. Not only have uncontrolled rivalries often disrupted diplomatic plans, but they have triggered a race to verbal and tactical militancy. Moderates have been marginalized or killed, and a discourse of compromise -- or even recognizing the limits of specific situations -- has been made to seem like treason.

Failing to make the all-important transition from revolutionary to politician, Arafat has been left wandering in the wilderness. His life shows us that, for him, the struggle has become an end in itself, whose sheer joy motivates him even when he is surrounded by enemy forces and has clearly suffered defeat. Even launching a losing war in 2000, and being twice besieged in his Ramallah office, did not seem a sufficiently pressing reason to change course.

That kind of thinking stems, in part, from his personality. He has been too much in love with revolution to end his own; too visionary to settle for banal practicality; too profound a believer in violence to see how counterproductive it so often is; too patient for his people's own good. He has consistently underestimated his enemy. Too confident of victory and too indifferent to the costs, Arafat has made the dangerous mistake of believing his own propaganda.

One key to understanding his worldview is to recognize that Arafat has never been a pragmatic nationalist, seeking a state quickly and focusing on the improvement of his people's material lot, but more of a mystic. Indeed, in contrast to his PLO colleagues, from his earliest years, when he was close to the Egyptian Muslim Brotherhood, he has had a strong streak of Islamist radical thinking. Only liberating the holy land and changing the course of Middle East history will suffice as a satisfactory goal for him.

Yet Arafat has been able to avoid paying for his errors, for a number of reasons. There was never a strong figure in his movement ready to challenge him personally; his great skill at internal politicking kept his lieutenants in line; he has been given the benefit of the doubt as the leader of a victimized people. As Abbas Zaki, one of his veteran followers turned critic, told us, "You may argue with the man, but not question the fact that he's the supreme commander of the people." Arab states have tolerated him because they wanted to manipulate the Palestinian cause, while Western countries have indulged him in the belief that toleration would bring them benefits in the Arab world.

As a result of his experiences, Arafat came to the conclusion that his will could reshape everything else. Symbolically, he "changed" his birthplace from Cairo to Jerusalem at an early age and has continued to maintain that fiction despite the fact that journalists discovered his Egyptian birth certificate. Similarly, he claimed to have been a hero in the 1948 war though, as we found from an examination of his own statements and contemporary records, he failed to fight in the war at all, a shameful secret that may have been one factor in his obsessive need to prove his courage and to proclaim himself a general.

Of course, until now, there has always been the possibility that Arafat would make a sharp change in course, exhibit a newfound determination to make and keep a compromise peace. Such a shift seemed likely when the 1993 Oslo agreement was signed. Yet the ensuing decade has shown all too clearly that Arafat has not changed. Now, as he nears the end of his career, his historical fate has become more apparent.

Despite the appointment of Abu Mazin as prime minister earlier this year, Arafat still controls his movement, and he does everything possible to sabotage the man he considers a rival. He is unwilling to conclude peace and determined to prevent anyone else from doing so. Doubtless, he will continue to hold power, and block progress, for as long as he lives.

This, then, is the epitaph for Arafat's career: He led his people far but on too long a journey, at too high a cost, and with an inability to bring them to a better life and the fulfillment of at least some of their aspirations.

Nevertheless, especially in his shortcomings, Arafat emerges as a unique figure, who must be explained if the history of this most enduring of contemporary conflicts is going to be understood.

Barry Rubin is director of the Global Research in International Affairs Center and editor of the Middle East Review of International Affairs. With Judith Colp Rubin, he is the author of Yasir Arafat: A Political Biography, published this month by Oxford University Press.

Copyright © 2003 by The Chronicle of Higher Education