The One Option Left

The recent round of gridlock in Washington DC may seem worlds away from the mythological visions and spiritual perspectives that have been central to this blog over the last few months. Still, there’s a direct connection. The forces that have driven American politicians into their current collection of blind alleys are also the forces that will make religious institutions very nearly the only structures capable of getting anything done in the difficult years to come, as industrial civilization accelerates along the time-honored trajectory of decline and fall.
 
To make sense of the connection, it’s necessary to start with certain facts, rarely mentioned in current media coverage, that put the last few weeks of government shutdown and potential Federal default in their proper context. These days the US government spends about twice as much each year as it takes in from taxes, user fees, and all other revenue sources, and makes up the difference by borrowing money.  Despite a great deal of handwaving, that’s a recipe for disaster.  If you, dear reader, earned US$50,000 a year and spent US$100,000 a year, and made up the difference by taking out loans and running up your credit cards, you could count on a few years of very comfortable living, followed by bankruptcy and a sharply reduced standard of living; the same rule applies on the level of nations.
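For readers who like to see the arithmetic, here is a minimal sketch of that household analogy in Python. The 5% interest rate is an assumption picked purely for illustration; actual borrowing costs would vary, but the shape of the curve would not.

```python
# A minimal sketch of the household debt analogy above.
# The 5% interest rate is an illustrative assumption.
income, spending, rate = 50_000, 100_000, 0.05

debt = 0.0
for year in range(1, 11):
    # borrow to cover the shortfall, then accrue interest on the balance
    debt = (debt + spending - income) * (1 + rate)
    print(f"Year {year:2}: debt ${debt:,.0f}, interest alone ${debt * rate:,.0f}/yr")
```

By the tenth year the debt tops $650,000 and the interest bill alone approaches two-thirds of the income; from there, no amount of comfortable living can paper over the arithmetic.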

Were you to pursue so dubious a project, in turn, one way to postpone the day of reckoning for a while would be to find some way to keep the interest rates you paid on your loans as low as possible. This is exactly what the US government has done in recent years. A variety of gimmicks, culminating in the current frenzy of “quantitative easing”—that is to say, printing money at a frantic pace—has forced interest rates down to historically low levels, in order to keep the federal government’s debt payments down to an annual sum that we can pretend to afford. Even a fairly modest increase in interest rates would be enough to push the US government into crisis; an increase on the same scale as those that have clobbered debt-burdened European countries in recent years would force an inevitable default.
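To put rough numbers on that sensitivity: the sketch below assumes total federal debt of about $17 trillion, which approximates the late-2013 figure, and simply multiplies it by a few hypothetical average rates.

```python
# Illustrative sensitivity of federal interest costs to the average
# rate paid; $17 trillion approximates total federal debt in late 2013.
# The rate actually paid depends on the maturity mix of the debt.
DEBT = 17e12
for rate in (0.01, 0.03, 0.06):
    print(f"At an average rate of {rate:.0%}: "
          f"${DEBT * rate / 1e9:,.0f} billion per year in interest")
```

Since federal revenues in fiscal 2013 came to roughly $2.8 trillion, an average rate of 6%, unremarkable by historical standards, would swallow more than a third of everything the government takes in.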

Sooner or later, the latter is going to happen. That’s the unmentioned context of the last few cycles of intractable financial stalemates in Washington. For more than a decade now, increasingly frantic attempts to kick the can further and further down the road have thus been the order of the day. In order to prevent a steep economic contraction in the wake of the 2000 tech-stock crash, the US government and the Federal Reserve Board—its theoretically independent central bank—flooded the economy with cheap credit and turned a blind eye to what became the biggest speculative delusion in history, the global real estate bubble of 2004-2008. When that popped, in turn, the US government and the Fed used even more drastic measures to stave off the normal consequences of a huge speculative bust.

None of those measures has a long shelf life. They’re all basically stopgaps, and it’s probably safe to assume that the people who decided to put them into place believed that before the last of the stopgaps stopped working, the US economy would resume its normal trajectory of growth and bail everyone out. That hasn’t happened, and there are good reasons to think that it’s not going to happen—not this year, not this decade, not in our lifetimes. We’ll get to those reasons shortly; the point that needs attention here is what this implies for the federal government here and now.

At some point in the years ahead, the US government is going to have to shut down at least half its current activities, in order to bring its expenditures back in line with its income. At some point in the years ahead, equally, the US government is going to have to default on its more than $16 trillion of unpayable debt, plus however much more gets added as we proceed. The shutdown and default that have absorbed so much attention in recent weeks, in other words, define the shape of things to come. This time, as I write these words, a temporary compromise seems to have been slapped together, but we’ll be back here again, and again, and again, until finally the shutdown becomes permanent, the default happens, and we move on into a harsh new economic reality.

It’s probably necessary at this point to remind my readers again that this doesn’t mean we will face the kind of imaginary full stop beloved by a certain class of apocalyptic theorists. Over the last twenty years or so, quite a few countries have slashed their government expenditures and defaulted on their debts. The results have included a great deal of turmoil and human suffering, but the overnight implosion of the global economy so often predicted has failed to occur, and for good reason. Glance back over economic history and you’ll find plenty of cases in which nations had to deal with crises of the same sort the US will soon face. All of them passed through hard times and massive changes, but none of them ceased to exist as organized societies; it’s only the widespread fixation on fantasies of apocalypse that leads some people to insist that for one reason or another, it’s different this time.

I plan on devoting several upcoming posts to what we can realistically expect when the US government has to slash its expenditures and default on its debts, the way that Russia, Argentina, and other nations have done in recent decades. For the moment, though, I want to focus on a different point: why has the US government backed itself into this mess? Yes, I’m aware of the theorists who argue that it’s all a part of some nefarious plan, but let’s please be real: to judge by previous examples, the political and financial leaders who’ve done this are going to have their careers terminated with extreme prejudice, and it’s by no means impossible that a significant number of them will end up dangling from lampposts. It’s safe to assume that the people who have made these decisions are aware of these possibilities. Why, then, their pursuit of the self-defeating policies just surveyed?

That pursuit makes sense only if the people responsible for the policies assumed they were temporary expedients, meant to keep business as usual afloat until a temporary crisis was over. From within the worldview of contemporary mainstream economics, it’s hard to see any other assumption they could have made. It’s axiomatic in today’s economic thought that economic growth is the normal state of affairs, and any interruption in growth is simply a temporary problem that will inevitably give way to renewed growth sooner or later. When an economic crisis happens, then, the first thought of political and financial leaders alike these days is to figure out how to keep business as usual running until the economy returns to its normal condition of growth.

The rising spiral of economic troubles around the world in the last decade or so, I suggest, has caught political and financial officials flatfooted, precisely because that “normal condition of growth” is no longer normal. After the tech-stock bubble imploded in 2000, central banks in the US and elsewhere forced down interest rates and flooded the global economy with a torrent of cheap credit. Under normal conditions, this would have driven an investment boom in productive capital of various kinds: new factories would have been built, new technologies brought to market, and so on, resulting in a surge in employment and tax revenues. While a modest amount of productive capital did come out of the process, the primary result was a speculative bubble even more gargantuan than the tech boom.

That was a warning sign too few people heeded. Speculative bubbles are a routine reality in market economies, but under ordinary circumstances they’re self-limiting in scale, because there are so many other less risky places to earn a decent return on investment. It’s only when an economy has run out of other profitable investment opportunities that speculative bubbles grow to gargantuan size. In the late 1920s, the mismatch between vast investment in industrial capital and a wildly unbalanced distribution of income meant that American citizens could not afford to buy all the products of American industry, and this pushed the country into a classic overproduction crisis. Further investment in productive capital no longer brought in the expected rate of return, and so money flooded into speculative vehicles, driving the huge 1929 bubble and bust.

The parallel bubble-and-bust economy that we’ve seen since 2000 or so followed similar patterns on an even more extreme scale. Once again, income distribution in the United States got skewed drastically in favor of the well-to-do, so that a growing fraction of Americans could no longer support the consumer economy with their purchases. Once again, returns on productive investment sank to embarrassing lows, leaving speculative paper of various kinds as the only game in town. It wasn’t overproduction that made productive capital a waste of investment funds, though—it was something considerably more dangerous, and also less easy for political and financial elites to recognize.

The dogma that holds that growth is the normal state of economic affairs, after all, did not come about by accident. It was the result of three centuries of experience in the economies of Europe and the more successful nations of the European diaspora. Those three centuries, of course, happened to take place during the most colossal economic boom in all of recorded history. Two factors discussed in earlier posts drove that boom:  first, the global expansion of European empires in the 17th, 18th, and 19th centuries and the systematic looting of overseas colonies that resulted; second, the exploitation of half a billion years of stored sunlight in the form of coal, petroleum, and natural gas.

Both those driving forces remained in place through the twentieth century; the European empires gave way to a network of US client states that were plundered just as thoroughly as old-fashioned imperial colonies once were, while the exploitation of the world’s fossil fuel reserves went on at ever-increasing rates. The peaking of US petroleum production in 1970 threw a good-sized monkey wrench into the gears of the system and brought a decade of crisis, but a variety of short-term gimmicks deferred the reckoning temporarily and opened the way to the final extravagant blowoff of the age of cheap energy.

The peaking of conventional petroleum production in 2005 marked the end of that era, and the coming of a new economic reality that no one in politics or business is yet prepared to grasp. Claims that the peak would be promptly followed by plunging production, mass panic, and apocalyptic social collapse proved to be just as inaccurate as such claims always are. What happened instead was that a growing fraction of the world’s total energy supply has had to be diverted, directly or indirectly, to the task of maintaining fossil fuel production. Not all that long ago, all things considered, a few thousand dollars was enough to drill an oil well that can still be producing hundreds of barrels a day decades later; these days, a fracked well in oil-bearing shale can cost $5 to $10 million to drill and hydrofracture, and three years down the road it’ll be yielding less than 10 barrels of oil a day.
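To make the contrast concrete, here is a deliberately cartoonish comparison in Python. Every number in it is an assumption for illustration: the legacy well is modeled as a steady producer, the shale well as a steep exponential decline tuned to match the figures above. Treat the output as an order-of-magnitude gesture, not field data.

```python
# Cartoon comparison of the two wells described above; all numbers
# are illustrative assumptions, not field data.
legacy_cost = 5_000                  # dollars to drill, decades ago
legacy_barrels = 100 * 365 * 30      # a steady 100 bbl/day for 30 years

shale_cost = 7_500_000               # midpoint of the $5-$10 million range
rate, shale_barrels = 400.0, 0.0     # initial bbl/day, cumulative output
for _ in range(10):                  # ten years of production
    shale_barrels += rate * 365
    rate *= 0.30                     # ~70% annual decline: near 10 bbl/day by year three

print(f"Legacy well: ${legacy_cost / legacy_barrels:.4f} of capital per barrel")
print(f"Shale well:  ${shale_cost / shale_barrels:.2f} of capital per barrel")
```

Under these assumptions, the capital cost works out to a fraction of a cent per barrel for the legacy well and on the order of $35 per barrel for the shale well; a difference of nearly four orders of magnitude in the energy industry’s own overhead is not the kind of thing an economy shrugs off.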

These increased costs and diminished returns don’t take place in a vacuum. The energy and products of energy that have to be put into the task of maintaining energy production, after all, aren’t available for other economic uses. In monetary terms—money, remember, is the system of tokens we use to keep track of the value of goods and services and to manage their distribution—oil prices upwards of $100 a barrel, and comparable prices for petroleum products, provide some measure of the tax on all economic activity that’s being imposed by the diversion of energy, resources, and other goods and services into petroleum production. Meanwhile, fewer businesses are hiring, less new productive capital gets built, and new technologies languish on the shelves: the traditional drivers of growth aren’t coming into play, because the surplus of real wealth needed to make them function isn’t there any more, having been diverted into drilling more and more short-lived wells in the Bakken Shale.

The broader pattern behind all these shifts is easy to state, though people raised in a growth economy often find it almost impossible to grasp. Sustained economic growth is made possible by sustained increases in the availability of energy and other resources for purposes other than their own production. The only reason economic growth seems normal to us is that we’ve just passed through an era three hundred years long in which, for the fraction of humanity living in western Europe, North America, and a few other corners of the world, the supply of energy and other resources soared well past any increases in the cost of production. That era is now over, and so is sustained economic growth.

The end of growth, though, has implications of its own, and some of these conflict sharply with expectations nurtured by the era of growth economics. It’s only when economic growth is normal, for example, that the average investment can be counted on to earn a profit. An investment is a microcosm of the whole economy; it’s because the total economy can be expected to gain value that investments, which represent ownership of a minute portion of the whole economy, can be expected to do the same thing. On paper, at least, investment in a growing economy is a positive-sum game; everyone can profit to one degree or another, and the goal of competition is to profit more than the other guy.

In a steady-state economy, by contrast, investment is a zero-sum game; since the economy neither grows nor contracts from year to year, the average investment breaks even, and for one investment to make a profit, another must suffer a loss. In a contracting economy, by the same logic, investment is a negative-sum game, the average investment loses money, and an investment that merely succeeds in breaking even can do so only if steeper losses are inflicted on other investments.
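The point can be checked with a minimal simulation, sketched below under one simplifying assumption: each investment’s return is just the economy’s growth rate g plus zero-mean noise, since competition only redistributes gains among investors without creating new ones.

```python
# Minimal sketch: if investments collectively mirror the whole economy,
# the average return must track the growth rate g; competition merely
# redistributes gains among investors.
import random

def average_return(g, investments=100_000):
    # each investment earns g plus zero-mean noise
    return sum(g + random.gauss(0, 0.05) for _ in range(investments)) / investments

for g, label in [(0.03, "growing"), (0.00, "steady-state"), (-0.03, "contracting")]:
    print(f"{label:>12} economy: average return {average_return(g):+.3f}")
```

However the noise falls on individual investors, the average comes out positive only when g is positive; in the steady-state and contracting cases, someone’s gain is necessarily someone else’s loss, or worse.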

It’s precisely because the conditions for economic growth are over, and have been over for some time now, that the US political and financial establishment finds itself clinging to the ragged end of a bridge to nowhere, with an assortment of alligators gazing up hungrily from the waters below.  The stopgap policies that were meant to keep business as usual running until growth resumed have done their job, but economic growth has gone missing in action, and the supply of gimmicks is running very short. I don’t claim to know exactly when we’ll see the federal government default on its debt and begin mass layoffs and program cutbacks, but I see no way that these things can be avoided at this point.

Nor is this the only consequence of the end of growth. In a contracting economy, again, the average investment loses money. That doesn’t apply simply to financial paper: if a business owner in such an economy invests in capital improvements, on average, those improvements will not bring a return sufficient to pay for the investment; if a bank makes a loan, on average, the loan will not be paid back in full; and so on. Every one of the mechanisms that a modern industrial economy uses to encourage people to direct surplus wealth back into the production of goods and services depends on the assumption that investments normally make a profit. Once that assumption fails, the process of economic contraction becomes self-reinforcing, because disinvestment and hoarding become the best available strategies, the sole effective way to cling to as much as possible of your wealth for as long as possible.

This isn’t merely a theoretical possibility, by the way; it has occurred reliably in the twilight years of other civilizations. The late Roman world is a case in point: by the beginning of the fifth century CE, it was so hard for Roman businessmen to make money that the Roman government passed laws requiring sons to go into their fathers’ professions, whether they could earn a living that way or not, and there were businessmen who fled across the borders and went to work as scribes, accountants, and translators for barbarian warlords, because the alternative was economic ruin in a collapsing Roman economy. Meanwhile, rich landowners converted their available wealth into gold and silver and buried it, rather than cycling it back into the economy, and moneylending became so reliable a source of social ills that lending at interest was declared a mortal sin in medieval Christianity, and remains forbidden in Islam right down to the present. When Dante consigned those who lend money at interest to the lowest part of the seventh circle of Hell in his Inferno, several notches below mass murderers, heretics, and fallen angels, he was reflecting a common belief of his time, and one that had real justification in the not so distant past.

Left to itself, the negative-sum game of economics in a contracting economy has no necessary endpoint short of the complete collapse of all systems of economic exchange. In the real world, it rarely goes quite that far, though it can come uncomfortably close. In the aftermath of the Roman collapse, for example, it wasn’t just lending at interest that went away. Money itself dropped out of use in most of post-Roman Europe—as late as the twelfth century, it was normal for most people to go from one year to the next without ever handling a coin—and market-based economic exchange, which had thrived in the Roman world, was replaced by feudal economies in which most goods were produced by those who consumed them, customary payments in kind took care of nearly all the rest, and a man could expect to hold land from his overlord on the same terms his great-grandfather had known.

All through the Long Descent that terminated the bustling centralized economy of the Roman world and replaced it with the decentralized feudal economies of post-Roman Europe, though, there was one reliable source of investment in necessary infrastructure and other social goods. It thrived when all other economic and political institutions failed, because it did not care in the least about the profit motive, and had different ways to motivate and direct human energies to constructive ends. It had precise equivalents in certain other dark age and medieval societies, too, and it’s worth noting that those dark ages that had some such institution in place were considerably less dark, and preserved a substantially larger fraction of the cultural and technological heritage of the previous society, than those in which no institution of the same kind existed.

In late Roman and post-Roman Europe, that institution was the Christian church. In other dark ages, other religious organizations have filled similar roles—Buddhism, for example, in the dark age that followed the collapse of Heian Japan, or the Egyptian priesthoods in the several dark ages experienced by ancient Egyptian society. When every other institution fails, in other words, religion is the one option left that provides a framework for organized collective activity. The revival of religion in the twilight of an age of rationalism, and its rise to a position of cultural predominance in the broader twilight of a civilization, thus has a potent economic rationale in addition to any other factors that may support it. How this works in practice will be central to a number of the posts to come.