Wednesday, February 02, 2011

The Bottom Line: There Has Never Been A War To End All Wars & There Has Never Been An Economic Theory To End All Economic Instability!

We are ignorant of two largely unknown economists — Charles Kindleberger and Hyman Minsky — and that is why we live in the Land O'The Stupid and the Home O'The Greedy. Leave it to the resident curmudgeon of the DC Fishwrap, Robert J. Samuelson, to rub our collective noses in our own mess. The Dow Jones Industrial Average has climbed back to its pre-Great Recession (2008) level of 12,040.35, and the champagne corks are popping among the Masters of the Universe. If this is (fair & balanced) folly, so be it.

[x WQ]
Rethinking The Great Recession
By Robert J. Samuelson

We Americans turn every major crisis into a morality tale in which the good guys and the bad guys are identified and praised or vilified accordingly. There’s a political, journalistic, and intellectual imperative to find out who caused the crisis, who can be blamed, and who can be indicted (either in legal courts or the court of public opinion) and, if found guilty, be jailed or publicly humbled. The great economic and financial crisis that began in 2007 has been no exception. It has stimulated an outpouring of books, articles, and studies that describe what happened: the making of the housing bubble, the explosion of complex mortgage-backed securities, the ethical and legal shortcuts used to justify dubious but profitable behavior. This extended inquest has produced a long list of possible villains: greedy mortgage brokers and investment bankers, inept government regulators, naive economists, self-serving politicians. What it hasn’t done is explain why all this happened.

The story has been all about crime and punishment when it should have been about boom and bust. The boom did not begin with the rise of home prices, as is usually asserted. It began instead with the suppression of double-digit inflation in the early 1980s, an event that unleashed a quarter-century of what seemed to be steady and dependable prosperity. There were only two recessions, both of them short and mild. Unemployment peaked at 7.8 percent. As inflation fell, interest rates followed. The stock market soared. From 1979 to 1999, stock values rose 14-fold. Housing prices climbed, though less spectacularly. Enriched, Americans borrowed and spent more. But what started as a justifiable response to good economic news—lower inflation—slowly evolved into corrupting overconfidence, the catalyst for the reckless borrowing, overspending, financial speculation, and regulatory lapses that caused the bust.

In some ways, the boom-bust story is both more innocent and more disturbing than the standard explanations of blundering and wrongdoing. It does not excuse the financial excesses, policy mistakes, economic miscalculations, deceits, and crimes that contributed to the collapse. But it does provide a broader explanation and a context. People were conditioned by a quarter-century of good economic times to believe that we had moved into a new era of reliable economic growth. Homeowners, investors, bankers, and economists all suspended disbelief. Their heady assumptions fostered a get-rich-quick climate in which wishful thinking, exploitation, and illegality flourished. People took shortcuts and thought they would get away with them. In this sense, the story is more understandable and innocent than the standard tale of calculated greed and dishonesty.

But the story is also more disturbing in that it batters our faith that modern economics—whether of the Left or Right—can protect us against great instability and insecurity. The financial panic and subsequent Great Recession have demonstrated that the advances in economic management and financial understanding that supposedly protected us from violent business cycles—ruling out another Great Depression—were oversold, exposing us to larger economic reversals than we thought possible. It’s true that we’ve so far avoided another depression, but it was a close call, and the fact that all the standard weapons (low interest rates, huge government budget deficits) have already been deployed leaves open the disquieting question of what would happen if the economic system again lurched violently into reverse. The economic theorems and tools that we thought could forewarn and protect us are more primitive than we imagined. We have not traveled so far from the panic-prone economies of 1857, 1893, and 1907 as we supposed.

Our experience since 2007 has also revealed a huge contradiction at the center of our politics. Prosperity is almost everyone’s goal, but too much prosperity enjoyed for too long tends to destroy itself. It seems that periodic recessions and burst bubbles—at least those of modest proportions—serve a social purpose by reminding people of economic and financial hazards and by rewarding prudence. Milder setbacks may avert less frequent but larger and more damaging convulsions—such as the one we’re now experiencing—that shake the country’s very political and social foundations. But hardly anyone wants to admit this publicly. What politician is going to campaign on the slogan, “More Recessions, Please”?

In a more honest telling of the story, avaricious Wall Street types, fumbling government regulators, and clueless economists become supporting players in a larger tragedy that is not mainly of their making. If you ask who did make it, the most honest answer is: We all did. Put differently, the widely shared quest for ever-improving prosperity contributed to the conditions that led to the financial and economic collapse. Our economic technocrats as well as our politicians and the general public constantly strive for expansions that last longer, unemployment that falls lower, economic growth that increases faster. Americans crave booms, which bring on busts. That is the unspoken contradiction.

Naturally, it’s unwelcome and unacknowledged. What we want to hear is that we were victimized and that, once the bad actors and practices are purged, we can resume the pursuit of uninterrupted and greater prosperity. So that’s what most crisis postmortems aim to do. They tell us who’s to blame and what we must accomplish to resume the quest for ever greater prosperity. Good policies will replace bad. To simplify only slightly, the theories of the crisis break into two camps—one from the Left, one from the Right.

From the Left, the explanation is greed, deregulation, misaligned pay incentives, and a mindless devotion to “free markets” and “efficient markets” theory. The result, it’s said, was an orgy of risk taking, unrestrained either by self-imposed prudence or sensible government oversight. Mortgage brokers and others relaxed lending standards for home mortgages because they were not holding them but passing them on to investment bankers, who packaged them in increasingly arcane securities, which were then bought by other investment entities (pension funds, hedge funds, foreign banks). These investors were in turn reassured because the securities had received high ratings from agencies such as Moody’s, Standard & Poor’s, and Fitch. All along the financial supply chain, people had incentives to minimize or ignore risks because the volume of loans, securitizations, or ratings determined their compensation. The more they ignored risk, the more they earned. The result was a mountain of bad debt that had to collapse, to the great peril of the entire financial system and the economy.

The Right’s critique blames the crisis mainly on government, which, it is alleged, encouraged risk taking in two ways. First, through a series of interventions in financial markets, it seemed to protect large investors against losses. Portfolio managers and lenders were conditioned to expect bailouts. Profits were privatized, it was said, and losses socialized. In 1984, government bailed out Continental Illinois National Bank and Trust Company, then the nation’s seventh-largest bank. In the mid-1990s, the Treasury rescued Mexico, thus protecting private creditors who had invested in short-term Mexican government securities. The protection continued with the bailout of the hedge fund Long-Term Capital Management in 1998. After the tech bubble burst in 2000, the Federal Reserve again rescued investors by lowering interest rates.

The second part of the Right’s argument is that government directly inflated the bubble by keeping interest rates too low (the Federal Reserve’s key rate fell to one percent in 2003) and subsidizing housing. In particular, Fannie Mae and Freddie Mac—government-created and -subsidized institutions—underwrote large parts of the mortgage market, including subprime mortgages.

We can test these theories of the crisis against the evidence. Each aims to answer the same questions. Why did the system spin out of control? What caused the surge in borrowing by households and financial institutions? What led to the decline in lending standards and, as important, the misreading of risk, even by supposedly sophisticated players and observers?

Let’s start with the critique from the Left. The presumption is that with adequate regulation, problems would have been identified and corrected before they reached crisis proportions. Although this analysis seems plausible—and has been embraced by many journalists, economists, and politicians, and by much of the public—it rests on a wobbly factual foundation. For starters, many major players were regulated: Multiple agencies, including the Federal Reserve, supervised all the large bank-holding companies, including Citigroup, Bank of America, and Wachovia. Washington Mutual, a large mortgage lender that had to be rescued and was merged into JPMorgan Chase, was regulated by the Office of Thrift Supervision. Fannie and Freddie were regulated. To be sure, gaps existed; many mortgage brokers were on loose leashes. But there was enough oversight that alert regulators should have spotted problems and intervened to stop dubious lending.

The problem was not absent regulation; it was that the regulators were no smarter than the regulated. By and large, they didn’t anticipate the troubles that would afflict subprime mortgages or the devastating financial and economic ripple effects. The idea that regulators possess superior wisdom rests mainly on the myth that tough regulation in the 1970s and ’80s prevented major financial problems. History says otherwise. In the 1980s, more than 1,800 banks failed, including savings and loan associations. Their problems were not anticipated.

More important, many of the largest U.S. banks almost failed. They had lent billions of dollars to Mexico, Brazil, and other developing countries—loans that could not be repaid. If banks had been forced to recognize these losses immediately, much of the banking system would have been “nationalized,” writes William Isaac, who headed the Federal Deposit Insurance Corporation between 1981 and 1985, in his recent book Senseless Panic. Losses would have depleted banks’ reserves and capital. Instead, regulators temporized. They allowed bad loans to be refinanced until banks’ capital increased sufficiently to bear the losses. Still, regulators weren’t smart enough to prevent the loans from being made in the first place.

As for greed and dishonesty, their role in the crisis is exaggerated. Of course, greed was widespread on Wall Street and elsewhere. It always is. There was also much mistaken analysis about the worth of mortgages and the complex securities derived from them. But being wrong is not the same as being dishonest, and being greedy is not the same as being criminal. Banks and investment banks were not, as a rule, offloading mortgage securities they knew to be overvalued. Some of this happened; testimony before the Financial Crisis Inquiry Commission shows that some banks knew (or should have known) about the poor quality of mortgages. But many big financial institutions kept huge volumes of these securities. They, too, were duped—or duped themselves. That’s why there was a crisis. Merrill Lynch, Bear Stearns, and Wachovia, among others, belonged to this group.

If anything, the Right’s critique—Wall Street became incautious because government conditioned it to be incautious—is weaker. It’s the textbook “moral hazard” argument: If you protect people against the consequences of their bad behavior, you will incite bad behavior. But this explanation simply doesn’t fit the facts. Investors usually weren’t shielded from their mistakes, and even when they were, it was not possible to know in advance who would and wouldn’t be helped. In 1984, the shareholders of Continental Illinois weren’t protected; when the FDIC rescued the bank, it also acquired 80 percent of the company’s stock. When the Federal Reserve orchestrated a bailout of Long-Term Capital Management in 1998, most of the original shareholders lost the majority of their stake. After the bursting of the stock market bubble in 2000, most investors weren’t spared massive paper losses, even with Alan Greenspan’s easy money. From the market’s peak in early 2000 to its trough in October 2002, stock values dropped 50 percent, a wealth loss of about $8.5 trillion, according to the investment advisory firm Wilshire Associates.

Likewise, many investors weren’t protected in the current crisis. The share prices of most major financial institutions—even those that survived—declined dramatically. The stockholders of Bear Stearns and Lehman Brothers suffered massive losses, and their executives and employees were among the biggest losers. Fannie and Freddie’s shareholders met a similar fate. Institutions that were “too big to fail” did fail in a practical sense. It is true that, both before and after the present crisis, some creditors were shielded. Foreign lenders in the Mexican debt crisis of the mid-1990s were protected, and most (though not all) lenders to major financial institutions were protected in the present crisis. But to repeat: The protections were not pervasive or predictable enough to inspire the sort of reckless risk taking that actually occurred.

As for interest rates, it is probably true that the very low rates adopted by Greenspan (the one percent rate on overnight loans lasted from June 2003 to June 2004, and even after that, rates remained low for several years) contributed to the speculative climate. Some investors did shift to riskier long-term bonds in an attempt to capture higher interest rates, and the additional demand likely reduced the return on these bonds somewhat. But a bigger effect on long-term rates, including mortgages, seems to have come from massive inflows of foreign money over which the Federal Reserve had no control. Moreover, the fact that housing booms also occurred in England, Spain, and Ireland, among other countries, seems to exonerate the Fed’s interest-rate policies as the main cause of the housing bubble.

The central question about the crisis that must be answered is, Why was almost everyone fooled? “Almost everyone” includes most economists (starting with Fed chairmen Alan Greenspan and Ben Bernanke), most investors, most traders, most bankers, the rating agencies, most government regulators, most corporate executives, and most ordinary Americans. There were, of course, exceptions or partial exceptions. Warren Buffett warned against the dangers of financial derivatives—but did not anticipate the problem of mortgages. In The Big Short (2010), journalist Michael Lewis chronicled the tale of professional investors who were dismissed as oddballs and deviants when they correctly questioned the worth of subprime mortgages. Economist Nouriel Roubini foresaw the connections between fragile financial markets and the real economy, but his early pessimism was a minority view.

People are conditioned by their experiences. The most obvious explanation of why so many people did not see what was coming is that they’d lived through several decades of good economic times that made them optimistic. Prolonged prosperity seemed to signal that the economic world had become less risky. Of course, there were interruptions to prosperity. Indeed, for much of this period, Americans groused about the economy’s shortcomings. Incomes weren’t rising fast enough; there was too much inequality; unemployment was a shade too high. These were common complaints. Prosperity didn’t seem exceptional. It seemed flawed and imperfect.

That’s the point. Beneath the grumbling, people from all walks of life were coming to take a basic stability and state of well-being for granted. Though business cycles endured, the expectation was that recessions would be infrequent and mild. When large crises loomed, governments—mainly through their central banks, such as the Federal Reserve—seemed capable of preventing calamities. Economists generally concurred that the economy had entered a new era of relative calm. A whole generation of portfolio managers, investors, and financial strategists had profited from decades of exceptional returns on stocks and bonds. But what people didn’t realize then—and still don’t—is that almost all these favorable trends flowed in one way or another from the suppression of high inflation.

It’s hard to recall now, but three decades ago, inflation was the nation’s main economic problem. It had risen from negligible levels of about one percent in 1960 to about six percent at the end of the 1960s and to 12 to 14 percent in 1979 and 1980. Hardly anyone believed it could be controlled, even as it wreaked deepening havoc: four recessions between 1969 and 1982, a stagnant stock market, and rising interest rates. And yet, the pessimists were proven wrong. A wrenching recession—deliberately engineered by then–Federal Reserve chairman Paul Volcker and supported by the newly elected Ronald Reagan—smothered inflationary psychology. It did so in a conventionally destructive way. Volcker tightened credit. Banks’ prime interest rates, the rates they charged on loans to their best customers, averaged 19 percent in 1981. There were gluts of jobless workers (unemployment reached 10.8 percent in late 1982), underutilized factories, and vacant stores and office buildings. But by 1984, inflation was down to four percent, and by 2000 it had gradually declined to the unthreatening levels of the early 1960s.

When Americans think of this inflation—if they think of it at all—they focus on inflation’s rise and ignore the consequences of its fall, disinflation. But these consequences were huge and mostly beneficial. The two recessions that occurred between 1982 and 2007—those of 1990–91 and 2001—each lasted only eight months. Over an entire quarter-century, the economy was in recession for a total of only 16 months, slightly more than a year. By contrast, the four recessions that struck between 1969 and 1982 lasted a total of 49 months, or about four years out of 13. Peak unemployment, 10.8 percent as noted, was much higher than in the following quarter-century, when it topped out at 7.8 percent. Economists called this subdued business cycle “the Great Moderation,” and wrote papers and organized conferences to explore it. But the basic explanation seemed evident: High and rising inflation was immensely destabilizing; low and falling inflation was not.

Declining inflation also stoked stock market and housing booms. By the end of 1979, the Standard & Poor’s 500 index had barely budged from its 1968 level; by year-end 1999, it had risen by a factor of 14. The rise in housing prices was less steep, though still impressive. In 1980, the median-priced existing home sold for $62,000; by 1999, the median price had climbed to $141,000. Declining interest rates propelled these increases. As inflation subsided—and as Americans realized that its decline was permanent—interest rates followed. From 1981 to 1999, interest rates on 10-year Treasury bonds fell from almost 14 percent to less than six percent. Lower rates boosted stocks, which became more attractive compared with bonds or money market funds. Greater economic stability helped by making future profits more certain. Lower interest rates increased housing prices by enabling buyers to pay more for homes.
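
To make that last mechanism concrete, here is a back-of-the-envelope sketch (the affordable_loan helper and all numbers are invented for illustration, not figures from the article) of how a falling mortgage rate lets a buyer with the same monthly budget bid more for a house. The rates roughly track the 1981-to-1999 decline in long-term rates described above.

# Rough illustration: the loan a fixed monthly payment supports on a
# 30-year fixed-rate mortgage at different interest rates.
# All numbers are hypothetical, chosen only to show the mechanism.

def affordable_loan(monthly_payment, annual_rate, years=30):
    """Principal supported by a level payment on a fully amortizing loan."""
    r = annual_rate / 12.0        # monthly interest rate
    n = years * 12                # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

payment = 1000.0                  # dollars per month, held constant
for rate in (0.14, 0.10, 0.06):
    print(f"at {rate:.0%}, the payment carries about ${affordable_loan(payment, rate):,.0f}")

# Output: roughly $84,000 at 14%, $114,000 at 10%, and $167,000 at 6% --
# the same monthly budget supports about twice the loan, so buyers can pay more.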

Millions of Americans grew richer. From 1980 to 2000, households’ mutual funds and stocks rose in value from $1.1 trillion to $10.9 trillion. The 10-fold increase outpaced that of median income, which roughly doubled during the same period, reaching $42,000. Over the same years, households’ real estate wealth jumped from $2.9 trillion to $12.2 trillion. Feeling richer and less vulnerable to recessions, Americans borrowed more (often against their higher home values). This borrowing helped fuel a consumption boom that sustained economic expansion. Disinflation had, it seemed, triggered a virtuous circle of steady economic and wealth growth.

It was not just the real economy of production and jobs that seemed to have become more stable. Financial markets—stocks, bonds, foreign exchange, and securities of all sorts—also seemed calmer. Volatility, a measure of how much prices typically fluctuate, declined in the early 2000s. Sophisticated investors and traders understood this. Studies confirmed it.

Finally, government economic management seemed more skillful. The gravest threats to stability never materialized. In October 1987, the stock market dropped a frightening 20 percent in a single day, but that did not trigger a deep recession. Neither did the 1997–98 Asian financial crisis (when some countries defaulted on loans) or the bursting of the tech bubble in 2000. In each case, the Federal Reserve seemed to check the worst consequences. Faith in the Fed grew; Greenspan was dubbed the “maestro.”

Well, if the real economy and financial markets were more stable and the government more adept, then once-risky private behaviors would be perceived as less hazardous. People could assume larger debts, because their job and repayment prospects were better and their personal wealth was steadily increasing. Lenders could liberalize credit standards, because borrowers were more reliable. Investors could adopt riskier strategies, because markets were less frenetic. In particular, they could add “leverage”—i.e., borrow more—which, on any given trade, might enhance profits.

So, paradoxically, the reduction of risk prompted Americans to take on more risk. From 1995 to 2007, household debt grew from 92 percent to 138 percent of disposable income. Bear Stearns, Lehman Brothers, and other financial institutions became heavily dependent on short-term loans that underpinned leverage ratios of 30 to 1 or more. (In effect, firms had $30 of loans for every $1 of shareholder capital.) Economists and government regulators became complacent and permissive. Optimism became self-fulfilling and self-reinforcing. Americans didn’t think they were behaving foolishly because so many people were doing the same thing. This—not deregulation or investor “moral hazard”—was the foundry in which the crisis was forged.
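
To put a number on what 30-to-1 leverage implies (a hypothetical sketch using the article's $30-of-loans-per-$1-of-capital framing, with invented figures), a small move in the value of the securities produces an outsized swing in the firm's own capital:

# Rough illustration of leverage: $30 of borrowed money for every $1 of
# shareholder capital, so $31 of securities held in total.
# All numbers are hypothetical, chosen only to show the mechanism.

equity = 1.0                  # shareholder capital
debt = 30.0                   # short-term loans funding the position
assets = equity + debt        # securities held

for change in (0.02, -0.02, -0.04):           # moves in securities prices
    new_assets = assets * (1 + change)
    new_equity = new_assets - debt            # lenders are still owed $30
    print(f"prices move {change:+.0%}: capital goes from {equity:.2f} to {new_equity:.2f}")

# A 2 percent gain returns about 62 percent on capital; a 2 percent loss
# erases about 62 percent of it; and a drop of a bit more than 3 percent
# (1/31 of the portfolio) wipes the capital out entirely.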

What now seems unwise could be rationalized then. Although households borrowed more, their wealth expanded so rapidly that their net worth—the difference between what they owned and what they owed—increased. Their financial positions looked stronger. From 1982 to 2004, households’ net worth jumped from $11 trillion to $53 trillion. Ascending home prices justified easier credit standards, because if (heaven forbid) borrowers defaulted, loans could be recouped from higher home values. Because the rating agencies adopted similarly favorable price assumptions, their models concluded that the risks of mortgage-backed securities were low. No less a figure than Greenspan himself dismissed the possibility of a nationwide housing collapse. People who sold a house usually had to buy another. They had to live somewhere. That process would sustain demand. “While local economies may experience significant speculative price imbalances,” he said in 2004, “a national severe price distortion seems most unlikely.”

As time passed, the whole system became more fragile and vulnerable. If the complex mortgage securities held by banks and others began to default—as they did—then the short-term loans that were used to finance the purchase of these securities would be curtailed or withdrawn, threatening the banks’ survival. Because no one knew precisely which banks held which securities (and, therefore, which banks were weakest), this process—once started—could cause a panic within the financial system. Banks, hedge funds, pensions, and corporations would retreat from trading and lending for fear that they might not be repaid. As banks and companies hoarded cash, production and jobs would decrease. Basically, that’s what happened. The initial reaction to disinflation, reflecting its real benefits, had disintegrated into overborrowing, speculation, and self-deception.

It’s worth noting that this explanation of the present crisis is neither widely held nor original. It vindicates Charles Kindleberger, the late economic historian who argued in his 1978 book Manias, Panics, and Crashes that financial crises occur in three stages. First comes “displacement”: a favorable development such as new technology, the end of a war, or a change in government that improves the economic outlook. Next is “euphoria”: the process by which a proportionate response to the original development becomes an artificial “bubble.” The last stage is “revulsion”: the recognition of excesses, which leads to panic and a collapse of speculative prices.

Beginning in the 1980s, the U.S. economy followed exactly this pattern. The decline of double-digit inflation was the original “displacement.” The ensuing prolonged prosperity spawned “euphoria,” which culminated in the “revulsion” and panic of 2008. But Kindleberger’s views—which built on those of the economist Hyman Minsky—have never commanded center stage among academic economists. Though widely read and respected, Kindleberger was always something of a renegade. He expressed skepticism and even contempt for the mathematical models and theoretical constructs that have defined mainstream macroeconomics for decades, while paying great attention to historical conditions and events.

If this explanation of the crisis is correct, it raises momentous questions. Since World War II, American democracy has been largely premised on its ability to create ever greater economic benefits—higher living standards, more social protections, greater job and income security—for most of its citizens. The promise has largely succeeded and, in turn, rests heavily on the belief, shared unconsciously by leaders in both parties, that we retain basic control over the economy. Until recently, the consensus among economists was that another Great Depression was unthinkable. We could prevent it. As for recessions, we might not be able to eliminate them entirely, but we could regulate them and minimize the damage. Economic knowledge and management had progressed. These comforting assumptions now hang in doubt.

The great delusion of the boom was that we mistook the one-time benefits of disinflation for a permanent advance in the art of economic stabilization. We did so because it fulfilled our political wish. Ironically, the impulse to improve economic performance degraded economic performance. This happened once before, in the 1960s and ’70s, when academic economists—among them Walter Heller of the University of Minnesota, James Tobin of Yale, and Robert Solow of MIT—sold political leaders on an ambitious agenda. Despite widespread post–World War II prosperity, there had been recessions every three or four years. Invoking John Maynard Keynes, the economists said they could—by manipulating budget deficits and interest rates—smooth business cycles and maintain “full employment” (then defined as four percent unemployment) most of the time. They couldn’t, and the effort to do so created the inflation that crippled the economy for 15 years.

We still haven’t forsaken the hope for perfected prosperity. After the recent crisis, both liberals and conservatives offered therapeutic visions. Liberals promoted expanded regulation to curb Wall Street’s excesses. Conservatives wanted a less activist government that would let markets perform their disciplining functions. Both may achieve some goals. Liberals have already engineered greater regulation. Banks will be required to hold more capital as a cushion against losses. The new financial reform legislation would allow government to shut large failing financial institutions, such as Lehman Brothers, without resorting to disruptive bankruptcy. Conservatives may take solace from the prospect of fewer bailouts; bailouts have become so unpopular that investors must know the chances of getting one have diminished. Together, these changes may make the financial system safer.

The trouble is that, like generals fighting the last war, we may be fighting the last economic crisis. Future threats to stability may originate elsewhere. One danger spot is globalization. Economies are intertwined in ways that are only crudely understood. Supply chains are global. Vast sums of money routinely cross borders and shift among currencies. Countries are mutually dependent and mutually vulnerable through many channels: Supplies of oil and other essential raw materials may be curtailed; cyberattacks could cripple vital computer networks; manipulated exchange rates might disrupt trade and investment flows. Economic activity has grown more international, while decision making remains largely with nation-states. Although the global economy has remained basically stable since World War II, there is really no good theory as to why it should stay so—and there are some signs (currency tensions, for instance) that it may not.

Overcommitted welfare states pose another threat. Most affluent nations face similar problems: High budget deficits and government debts may portend a loss of investor confidence, but the deficits and debts have been driven higher by massive social spending—on pensions, health care, unemployment insurance, education—that people have come to expect. Economics and politics are colliding. If the debt and deficits aren’t controlled, will investors someday desert bond markets, jolting interest rates upward and triggering a new financial crisis? But if many countries try to control deficits simultaneously, might a tidal wave of spending cuts and tax increases cause a global depression? (The United States, Europe, and Japan still constitute about half the world’s economy.) These are all good questions without good answers. The underlying problem is that economic change seems to have outrun economic understanding and control.

It’s widely believed that the financial panic and Great Recession constitute a watershed for global capitalism, which has been (it’s said) permanently discredited. Around the world, the political pendulum is swinging from unfettered competition toward more government oversight. Markets have been deemed incorrigibly erratic. Greed must be contained, and the greedy must be taxed. These ideas reflect a real shift in thinking, but in time that may not be seen as the main consequence of the economic collapse. These ideas imply that capitalism was unsupervised and untaxed before. Of course, this is not true. Businesses everywhere, big and small, were and are regulated and taxed. Future changes are likely to be those of degree, in part because countervailing forces, mobile capital being the most obvious, will impose limits. Countries that oppressively regulate or tax are likely to see businesses go elsewhere.

What looms as the most significant legacy of the crisis is a loss of economic control. Keynes famously remarked that “practical men” are “usually the slaves of some defunct economist.” By this he meant that politics and public opinion are often governed by what economists (living and dead, actually) define as desirable and doable. In the years after World War II, the prevailing assumption among economists, embraced by much of the public, was that we had conquered the classic problem of booms and busts. Grave economic crises afflicted only developing countries or developed countries that had grossly mismanaged their affairs. This common view is no longer tenable. It has been refuted by events.

Our economic knowledge and tools came up short. Either they were overwhelmed by change or their power was always exaggerated. This does not mean that economic growth will cease. Chances are that the United States and the other prosperous nations of the developed world will, over time, get wealthier as a result of technological changes that are now barely glimpsed. But the widespread faith—and the sense of security it imparted—that economic management would forever spare us devastating disruptions has been shattered. Just as there has never been a war to end all wars, there has yet to be an economic theory that can end all serious instability. Ω

[Robert J. Samuelson, a columnist for Newsweek and The Washington Post, is the author, most recently, of The Great Inflation and Its Aftermath: The Past and Future of American Affluence (2008). Samuelson received his bachelor's degree in 1967 from Harvard University, where he majored in government.]

Copyright © 2011 The Woodrow Wilson International Center for Scholars


Sapper's (Fair & Balanced) Rants & Raves by Neil Sapper is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Based on a work at sapper.blogspot.com. Permissions beyond the scope of this license may be available here.



Copyright © 2011 Sapper's (Fair & Balanced) Rants & Raves