I’ve commented several times in these essays about the way
that Americans in particular, and people throughout the industrial world more
generally, like to pretend that history has nothing to teach them. It’s a
remarkably odd habit, not least because the lessons of history keep whacking
them upside the head with an assortment of well-aged and sturdy timbers,
without ever breaking through the trance.
My favorite example, not least because I’ve profited
personally by it, is the way that the lessons taught by speculative bubbles
never seem to make it out of our collective equivalent of short-term memory.
It happens that in my adult life, I’ve
had a ringside seat at four big speculative frenzies, and it happens also that
the first of them, the runup to the 1987 US stock market crash, got started
right about the time I first read John Kenneth Galbraith’s mordantly funny
history The Great Crash 1929. I then got to watch a stock
market bubble indistinguishable from the 1929 example unfold in front of my
eyes, complete with the usual twaddle about new economic eras and limitless
upside possibilities. It was quite a learning experience, though I didn’t have
any money of my own in the market.
A decade after the 1987 crash, the same twaddle got deployed
a second time as tech stocks began their ascent to ionospheric heights. At the
time I was living in Seattle, one of the epicenters of the tech stock mania,
and I got approached more times than I can easily remember by friends who worked
in the computer industry, and who wanted to give me a chance to cash in on the
new economic era and its limitless upside possibilities. I declined and, when
pressed, explained my reasons with reference to Galbraith, 1929, and the 1987
crash. The standard response was condescending pity, a lecture about how I
obviously didn’t know the first thing about tech stocks, and enthusiastic
praise of books such as the wildly popular and wildly delusional Dow 36,000.
Shortly thereafter, the market crashed, and my friends’ intimate knowledge of
tech stocks didn’t keep them from losing their shirts.
Fast forward to 2004, and the same twaddle was deployed
again. This time the investment du jour was real estate, and once again I was
approached by any number of friends who wanted to help me cash in on the new
economic era and its limitless upside possibilities. Once again I declined and,
when pressed, explained my reasons with reference to Galbraith, 1929, the 1987
crash, and the tech stock bubble and bust. The usual response? You guessed
it—condescending pity, a lecture about how I obviously didn’t know the first
thing about real estate, and enthusiastic praise of books such as David
Lereah’s epically mistimed Why The Real Estate Boom Will Not
Bust. I was confident enough this time that my wife and I stayed out
of the real estate bubble, waited for prices to plummet, and bought the home
where we now live for an absurdly small amount of money. Meanwhile the people I
knew who planned on becoming real estate millionaires got clobbered when the
bottom fell out of the market.
Tune into the media these days—not just the mainstream
media, but the alternative media as well—and you’ll hear that same twaddle
clustering around several different asset classes. Among those that have risen parabolically in
recent years, and have now seen steep declines, the same rhetoric once hawked
by David Lereah and the authors of Dow 36,000 is all over
the place: don’t worry about those falling prices, they’re the result of some
temporary factor or other, they don’t reflect the fundamentals, and so on.
You’ll find that same rhetoric chronicled by Galbraith, too, among the
promoters and victims of the 1929 crash; it appears like clockwork as soon as a
speculative bubble begins to lose momentum, and increases in volume as the
bottom drops out.
Try to tell the people who are about to get crushed by the
current round of bubbles that that’s what’s happening, though, and you’ll get
the same condescending pity and the same lecture about how you obviously don’t
know the first thing about whatever asset is involved this time around. No matter how precise the parallels, they’ll
insist that the painful lessons taught by every previous speculative bubble in
history are irrelevant to their investment strategy this time around, and
they’ll keep on saying that even when your predictions turn out to be correct
and theirs end up costing them their shorts. What’s more, a decade from now, if
they start talking about how they’re about to get rich by investing in thorium
mining stocks or what have you, and you point out that they’re doing exactly
the same thing that cost them their shorts the last time around, you’ll get
exactly the same response.
There are any number of factors feeding into this weird and
self-defeating blindness to the most painful lessons of recent financial
history. To begin with, of course, there’s the widening mismatch between the
American dream of endlessly improving economic opportunity and the American
reality of steadily declining standards of living for everyone outside a
narrowing circle of the well-to-do. Our national mythology makes it impossible
for most Americans to conceive of a future of accelerating contraction and impoverishment,
and so any excuse to believe that happy days are here again will attract an
instant and uncritical audience. Consider the extraordinary fog of
misinformation surrounding the current fracking bubble—the increasingly loud
and frantic claims that the modest temporary gains in oil production driven by
the fracking phenomenon guarantee a future of abundant energy and prosperity
for all. It’s the same twaddle about a
new era with limitless upside potential, but it’s even more popular than usual,
because the alternative is facing the future that’s taking shape around us.
There are plenty of other forces pushing in the same
direction, to be sure. One of them is particularly relevant to the theme of the
current series of posts here on The Archdruid Report. It’s
the general neglect of a style of thinking that is of central importance in
modern science, but remains curiously unpopular in American culture. At the risk of scaring readers away by using
a long word, I’ll give it its proper name: morphological thinking.
Morphology is the study of form. Applied to biology, it was
the driving force behind the intellectual revolution we nowadays associate with
Charles Darwin, but was under way long before his time. Johann Wolfgang von Goethe showed in 1784
that the bones of the human skull are modified vertebrae that still retain
their original relationship to one another. In 1790 he extended the same logic to plants, showing that all aboveground parts of a plant are modifications of a primitive leaf structure. Two generations of scholars
built on Goethe’s work to show that every living creature has deep structural
similarities with other life forms, living and extinct: the bones of a cat’s foreleg, a dolphin’s
flipper, and a bat’s wing all have the same structure, and a close study of all three makes it impossible
not to see the ancient mammalian forelimb that, over millions of years of deep
time, evolved into each of them. Darwin’s achievement was simply that of providing a
convincing explanation for the changes that earlier biologists had already
sketched out.
The major source of opposition to all these claims was the unwillingness to apply the same morphological principles to human beings. Goethe’s researches into the skull and Darwin’s studies of natural selection alike ran into heated challenges from those who were unwilling to see themselves included in the same category as other animals: to notice, for example, that the same bone patterns found in the bat’s wing, the porpoise’s flipper, and the cat’s foreleg are also present in your hand. Even so, the morphological approach triumphed, because even the opponents of evolutionary theory ended up using it. Georges Cuvier, a famous biologist of the generation before Darwin, was a fierce opponent of theories of evolution, yet he was able to take a few bones from an extinct creature, sketch out what the rest of the animal would have looked like—and get it right.
Morphology is especially useful in fields of
study where it’s impossible to know the causes of change. Evolutionary biology
is a great example; we don’t have the opportunity to go back into the thinning
forests of East Africa five or six million years ago, scatter instruments
across the landscape, and figure out exactly why it was that several kinds of
primates came down from the trees and took up a life on the open savannah
around that time. What we have are the
morphological traces of that descent, and of the different adaptations that
enabled those early primates to survive—the long legs of the patas monkey, the
hefty muscles and sharp teeth of the baboons, your upright posture, and so on.
From that, we can figure out quite a bit about what happened to each of those
primate lineages, even in the absence of videotapes from the Pliocene.
Science has its fads and fashions, just like everything else
human, and morphology has accordingly gone in and out of style as an analytic
tool at various points down through the years. The same rule applies to other
fields of scholarship where morphology can be used. History’s among the classic
examples. There’s a long tradition of morphological thinking in history,
because the causes of historical change are generally hidden from scholars by
the lapse of time and the sheer complexity of the past. Giambattista Vico,
Oswald Spengler, and Arnold Toynbee are among the most important historians who
put civilizations side by side in order to trace out common patterns of rise
and fall. These days, that approach has fallen out of fashion, and other
analytic tools get much more of a workout in historical scholarship, but the
method remains useful in making sense of the past and, in certain situations,
of the future as well.
That’s the secret, or one of the secrets, of morphological
thinking. If you’ve learned to recognize
the shape of a common sequence of events, and you see the first stages of that
sequence get under way, you can predict the outcome of the sequence, and be
right far more often than not. That’s what I was doing, though I didn’t yet
know the formal name for it, when I considered the tech stock bubble, compared
it to the stock market bubble of the mid-1980s and to previous examples of the
same species, and predicted that it would end in a messy crash and a wave of
bankruptcies—as of course it did.
That’s also what I was doing in the early days of this blog,
with a little better grasp of the underlying theory, when I compared the
confident rhetoric of contemporary American life to the harsh realities of
overshoot, and predicted that the price of oil would climb and the American
economy stumble down a ragged curve of contraction papered over by statistical
gimmicks and jury-rigged financial engineering—as of course it has. That’s
also, in a different sense, what I’m doing in the current sequence of posts, in
which I’m placing today’s popular faith in the inevitability and beneficence of
progress side by side with other civil religions and, more broadly, with
theistic religions as well.
Over the weeks just past, as I’ve begun to make that
comparison here, I’ve fielded quite a few comments insisting that the
comparison itself is inadmissible. Some
of those comments assume that calling the modern faith in the inevitability and
beneficence of progress a civil religion must amount to a criticism of that
faith, or perhaps a debater’s tactic meant to justify claiming various unsavory
things about it. It interests me that those who made these comments apparently
didn’t consider the possibility that a religious person, the head of a
religious organization and the author of quite a number of books about
religious subjects—all of which I am—might not use the word “religion” as a
putdown.
Another share of those comments comes from people who
apparently either didn’t read or didn’t absorb the paragraphs at the beginning
of my two latest posts explaining that religion is not a specific, concrete
thing, but rather an abstract category into which a diverse assortment of human
beliefs, practices and institutions can reasonably be fitted. These are the
comments that insist that faith in progress can’t be a religion because
religions by definition believe in things that can’t be proved to exist, or
what have you. Now of course it’s
worthwhile to ask where such definitions come from, and how well they actually
fit the facts on the ground, but there’s another point at issue here.
Human beliefs, practices and institutions rarely come into
existence with the words “this is a religion” stamped on them. People whose
cultures have the category “religion” among their higher-order
abstractions are generally the ones who make that judgment call. All the
judgment call means, in turn, is that in the eyes of the people making it, the
things gathered together under the label “religion” have enough in common that
it makes sense to talk about them as members of a common category.
When we talk about individual religions—Christianity,
Druidry, faith in progress, or what have you—we’re still talking about abstract
categories, though they’re abstractions of a lower order, reflecting
constellations of beliefs, practices, institutions, and the like that can be
observed together in specific instances in the real world: this person in this
building praying to this deity in words drawn from this scripture, for
example. Those specific instances are
the concrete realities that make the abstractions useful tools for
understanding. To borrow a useful quote from the philosopher José
Ortega y Gasset: “The abstract is no more than an instrument, an organ, to see
the concrete clearly.” Now of course it might be claimed that a given
abstraction can’t be used to see some specific concrete reality clearly, but
it’s hardly reasonable to do so in advance of making the experiment.
One interesting wrinkle on this last point comes from a
commenter who insists, quoting a scholar of religious studies from India, that
the concept “religion” is purely a modern Western notion and can’t be used
outside that context. Since I’m discussing faith in progress as a civil
religion in the modern Western world, it’s hard to see how this criticism
applies, but there’s a deeper issue as well. It so happens that a noticeable
minority of the world’s languages have no word for the color that, in English,
we call “orange.” Does that mean that speakers of those languages don’t
perceive light in the relevant wavelengths? Of course not; they simply use
different words to divide up the color spectrum.
In the same way, some of the world’s languages and cultures
don’t find the higher-order abstraction “religion” useful. The phenomena
assigned to the category “religion” in English still exist in those languages
and cultures—you’ll find, for example, that good clear translations of words
such as “deity,” “worship,” “temple,” “prayer,” “offering,” “scripture,” and
the like can be found in a great many languages that have no word for
“religion” as such. Since we’re having this discussion in English, in turn, and
talking about a pattern in contemporary American society, it’s not unreasonable
to use the resources of the English language to provide useful categories, and
the word “religion” is one of those.
Finally, there are the comments that assume that anyone who
doubts that progress can continue indefinitely must hate progress and long for
a return to primitive squalor, or what have you. I get comments of this sort
regularly, and so do other writers and bloggers who ask the sort of questions I
do. For so popular a notion, it’s
remarkably weird. It’s as though someone
were to claim that anyone who notices the chill in the air and the first yellow
leaves on a September morning, and recognizes that autumn is on its way, must
hate summer, or that the person who pounds on your door at two in the morning
shouting “Your house is on fire!” wants you to burn to death.
Too much of the talk about progress in recent decades, it
seems to me, has focused obsessively on labeling it good or bad, and stopped
there. That sort of simplistic discussion doesn’t interest me. What does
interest me is the relation between the three centuries of drastic social and
technological change immediately behind us, and the
impact of those three centuries on the shape of the future immediately
ahead. The widespread faith in progress
that shapes so much of the cultural mainstream in most modern industrial
nations is a crucial part of that relationship; while it retains its present
role in public life, it has a great deal to say about which ideas and projects
are acceptable and which are not; if it
implodes, as civil religions very often do under certain predictable circumstances,
that implosion will have massive consequences for politics, culture, and the
shape of the future.
Thus I’d like to ask my readers to bear with me in the weeks
and months ahead, whether or not the description of faith in progress as a civil
religion makes any obvious sense to you, and try to see the modern faith in
progress through the abstract category of religion, using the same sort of
morphological thinking I’ve discussed above.
I grant freely that a porpoise doesn’t look much like a bat, and neither one has much resemblance to you. If you put the bones of the porpoise’s
flipper next to the bones of the bat’s wing, and then compare them with the bones of your hand, it becomes
possible to learn things that are much harder to understand in any other context. In the same way, if we put the contemporary faith in progress side by side with the belief systems that have defined the basic presuppositions of meaning and value for other societies, certain patterns become clear—and those patterns bid fair to be of immense importance in the years ahead of us.