It can be hard to remember these days that not much more
than half a century ago, philosophy was something you read about in
general-interest magazines and the better grade of newspapers. Existentialist
philosopher Jean-Paul Sartre was an international celebrity; the posthumous
publication of Pierre Teilhard de Chardin’s Le Phénomène Humain (the
English translation, predictably, was titled The Phenomenon of Man) got
significant flurries of media coverage; Random House’s Vintage Books label
brought out cheap mass-market paperback editions of major philosophical
writings from Plato straight through to Nietzsche and beyond, and made money
off them.
Though philosophy was never really part of the cultural
mainstream, it had the same kind of following as avant-garde jazz, say, or
science fiction. At any reasonably large
cocktail party you had a pretty fair chance of meeting someone who was into it,
and if you knew where to look in any big city—or any college town with pretensions
to intellectual culture, for that matter—you could find at least one bar or
bookstore or all-night coffee joint where the philosophy geeks hung out, and
talked earnestly into the small hours about Kant or Kierkegaard. What’s more,
that level of interest in the subject had been pretty standard in the Western
world for a very long time.
We’ve come a long way since then, and not in a particularly useful
direction. These days, if you hear somebody talk about philosophy in the media,
it’s probably a scientific materialist like Neil deGrasse Tyson ranting about
how all philosophy is nonsense. The occasional work of philosophical exegesis
still gets a page or two in the New York Review of Books,
but popular interest in the subject has vanished, and more than vanished: the
sort of truculent ignorance about philosophy displayed by Tyson and his many
equivalents has become just as common among the chattering classes as a feigned
interest in the subject was half a century before.
Like most human events, the decline of philosophy in modern
times was overdetermined; like the victim in the murder-mystery paperback who
was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe,
and then shoved off a bridge to drown, it had more causes of death than the
situation actually required. Part of the problem, certainly, was the explosive
expansion of the academic industry in the US and elsewhere in the second half
of the twentieth century. In an era when
every state teacher’s college aspired to become a university and every state
university dreamed of rivaling the Ivy League, a philosophy department was an
essential status symbol. The resulting expansion of the field was not
necessarily matched by an equivalent increase in genuine philosophers, but it
was certainly followed by the transformation of university-employed philosophy
professors into a professional caste which, as such castes generally do,
defended its status by adopting an impenetrable jargon and ignoring or
rebuffing attempts at participation from outside its increasingly airtight
circle.
Another factor was the rise of the sort of belligerent
scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson.
Scientific inquiry itself is philosophically neutral—it’s possible to practice
science from just about any philosophical standpoint you care to name—but the
claim at the heart of scientific materialism, the dogmatic insistence that
those things that can be investigated using scientific methods and explained by
current scientific theory are the only things that can possibly exist, depends
on arbitrary metaphysical postulates that were comprehensively disproved by
philosophers more than two centuries ago. (We’ll get to those postulates and
their problems later on.) Thus the ascendancy of scientific materialism in
educated culture pretty much mandated the dismissal of philosophy.
There were plenty of other factors as well, most of them
having no more to do with philosophy as such than the ones just cited.
Philosophy itself, though, bears some of the responsibility for its own
decline. Starting in the seventeenth century and reaching a crisis point in the
nineteenth, western philosophy came to a parting of the ways—one that the
philosophical traditions of other cultures reached long before it, with similar
consequences—and by and large, philosophers and their audiences alike chose a
route that led to its present eclipse. That choice isn’t irreparable, and
there’s much to be gained by reversing it, but it’s going to take a fair amount
of hard intellectual effort and a willingness to abandon some highly popular
shibboleths to work back to the mistake that was made, and undo it.
To help make sense of what follows, a concrete metaphor
might be useful. If you’re in a place where there are windows nearby,
especially if the windows aren’t particularly clean, go look out through a
window at the view beyond it. Then, after you’ve done this for a minute or so,
change your focus and look at the window rather than through it,
so that you see the slight color of the glass and whatever dust or dirt is
clinging to it. Repeat the process a few times, until you’re clear on the shift
I mean: looking through the window, you see the world; looking at the
window, you see the medium through which you see the world—and you might just
discover that some of what you thought at first glance was out there in the
world was actually on the window glass the whole time.
That, in effect, was the great change that shook western
philosophy to its foundations beginning in the seventeenth century. Up to that
point, most philosophers in the western world started from a set of unexamined
presuppositions about what was true, and used the tools of reasoning and
evidence to proceed from those presuppositions to a more or less complete
account of the world. They were into what philosophers call metaphysics:
reasoned inquiry into the basic principles of existence. That’s the focus of
every philosophical tradition in its early years, before the confusing results
of metaphysical inquiry refocus attention from “What exists?” to “How do we
know what exists?” Metaphysics then gives way to epistemology: reasoned
inquiry into what human beings are capable of knowing.
That refocusing happened in Greek philosophy around the
fourth century BCE, in Indian philosophy around the tenth century BCE, and in
Chinese philosophy a little earlier than in Greece. In each case, philosophers
who had been busy constructing elegant explanations of the world on the basis
of some set of unexamined cultural assumptions found themselves face to face
with hard questions about the validity of those assumptions. In terms of the
metaphor suggested above, they were making all kinds of statements about what
they saw through the window, and then suddenly realized that the colors they’d
attributed to the world were being contributed in part by the window glass and
the dust on it, the vast dark shape that seemed to be moving purposefully
across the sky was actually a beetle walking on the outside of the window, and
so on.
The same refocusing began in the modern world with René Descartes, who
famously attempted to start his philosophical explorations by doubting
everything. That’s a good deal easier said than done, as it happens, and to a
modern eye, Descartes’ writings are riddled with unexamined assumptions, but
the first attempt had been made and others followed. A trio of epistemologists
from the British Isles—John Locke, George Berkeley, and David Hume—rushed in
where Descartes feared to tread, demonstrating that the view from the window
had much more to do with the window glass than it did with the world outside.
The final step in the process was taken by the German philosopher Immanuel
Kant, who subjected human sensory and rational knowledge to relentless
scrutiny and showed that most of what we think of as “out there,” including
such apparently hard realities as space and time, is actually an artifact of
the processes by which we perceive things.
Look at an object nearby: a coffee cup, let’s say. You
experience the cup as something solid and real, outside yourself: seeing it,
you know you can reach for it and pick it up; and to the extent that you notice
the processes by which you perceive it, you experience these as wholly passive,
a transparent window on an objective external reality. That’s normal, and there
are good practical reasons why we usually experience the world that way, but
it’s not actually what’s going on.
What’s going on is that a thin stream of visual information
is flowing into your mind in the form of brief fragmentary glimpses of color
and shape. Your mind then assembles these together into the mental image of the
coffee cup, using your memories of that and other coffee cups, and a range of
other things as well, as a template onto which the glimpses can be arranged.
Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed,
gave the process we’re discussing the useful label of “representation”; when you look at
the coffee cup, you’re not passively seeing the cup as it exists, you’re
actively representing—literally re-presenting—an image of the cup in your mind.
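If it helps to see that process in a more mechanical form, here is a
deliberately crude sketch in Python: a toy analogy of my own, not
Schopenhauer’s account and certainly not a model of how nervous systems work.
The glimpses and templates below are invented for the purpose; the only point
is that what comes out depends as much on the stored templates as on the
incoming fragments.

    # Toy analogy for "representation": fragmentary glimpses are matched
    # against remembered templates, and the best match is what gets "seen."
    # The templates and glimpses are invented for illustration only.
    TEMPLATES = {
        "coffee cup": {"curved rim", "handle", "hollow", "ceramic sheen"},
        "bowl":       {"curved rim", "hollow", "wide mouth"},
        "beetle":     {"dark blob", "six legs", "moving"},
    }

    def represent(glimpses):
        """Pick the remembered template that best accounts for the glimpses."""
        def fit(features):
            return len(glimpses & features) / len(features)
        best = max(TEMPLATES, key=lambda name: fit(TEMPLATES[name]))
        return best, fit(TEMPLATES[best])

    print(represent({"curved rim", "handle", "ceramic sheen"}))
    # -> ('coffee cup', 0.75): the same fragments, given different templates,
    #    would just as readily have been "seen" as something else.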
There are certain special situations in which you can watch
representation at work. If you’ve ever woken up in an unfamiliar room at night,
and had a few seconds pass before the dark unknown shapes around you finally
turned into ordinary furniture, you’ve had one of those experiences. Another is
provided by the kind of optical illusion that can be seen as two different
things. With a little practice, you can flip from one way of seeing the
illusion to another, and watch the process of representation as it happens.
What makes the realization just described so challenging is
that it’s fairly easy to prove that the cup as we represent it has very little
in common with the cup as it exists “out there.” You can prove this by means of
science: the cup “out there,” according to the evidence collected painstakingly
by physicists, consists of an intricate matrix of quantum probability fields
and ripples in space-time, which our senses systematically misperceive as a
solid object with a certain color, surface texture, and so on. You can also
prove this, as it happens, by sheer sustained introspection—that’s how Indian
philosophers got there in the age of the Upanishads—and you can prove it just
as well by a sufficiently rigorous logical analysis of the basis of human
knowledge, which is what Kant did.
The difficulty here, of course, is that once you’ve figured
this out, you’ve basically scuttled any chance at pursuing the kind of
metaphysics that’s traditional in the formative period of your philosophical
tradition. Kant got this, which is why he followed the most relentless of his
analyses with a book titled Prolegomena to Any Future Metaphysics; what he meant by this
was that anybody who wanted to try to talk about what actually exists had
better be prepared to answer some extremely difficult questions first. When philosophical traditions hit their
epistemological crises, accordingly, some philosophers accept the hard limits
on human knowledge, ditch the metaphysics, and look for something more useful
to do—a quest that typically leads to ethics, mysticism, or both. Other
philosophers double down on the metaphysics and either try to find some way
around the epistemological barrier, or simply ignore it, and this latter option
is the one that most Western philosophers after Kant ended up choosing. Where that leads—well, we’ll get to that
later on.
For the moment, I want to focus a little more closely on the
epistemological crisis itself, because there are certain very common ways to
misunderstand it. One of them I remember with a certain amount of discomfort,
because I made it myself in my first published book, Paths of Wisdom.
This is the sort of argument that sees the sensory organs and the nervous
system as the reason for the gap between the reality out there—the “thing in
itself” (Ding an sich), as Kant called it—and the representation as we
experience it. It’s superficially very convincing: the eye receives light in
certain patterns and turns those into a cascade of electrochemical bursts
running up the optic nerve, and the visual centers in the brain then fold,
spindle, and mutilate the results into the image we see.
The difficulty? When we look at light, an eye, an optic
nerve, a brain, we’re not seeing things in themselves, we’re seeing another set
of representations, constructed just as arbitrarily in our minds as any other
representation. Nietzsche had fun with this one: “What? and others even go so
far as to say that the external world is the work of our organs? But then our
body, as a piece of this external world, would be the work of our organs! But
then our organs themselves would be—the work of our organs!” That is to say,
the body is also a representation—or, more precisely, the body as we perceive
it is a representation. It has another aspect, but we’ll get to that in a
future post.
Another common misunderstanding of the epistemological
crisis is to think that it’s saying that your conscious mind assembles the
world, and can do so in whatever way it wishes. Not so. Look at the coffee cup
again. Can you, by any act of consciousness, make that coffee cup suddenly
sprout wings and fly chirping around your computer desk? Of course not. (Those
who disagree should be prepared to show their work.) The crucial point here is
that representation is neither a conscious activity nor an arbitrary one. Much
of it seems to be hardwired, and most of the rest is learned very early in
life—each of us spent our first few years learning how to do it, and scientists
such as Jean Piaget have chronicled in detail the processes by which children
gradually learn how to assemble the world into the specific meaningful shape
their culture expects of them.
By the time you’re an adult, you do that instantly, with no
more conscious effort than you’re using right now to extract meaning from the
little squiggles on your computer screen we call “letters.” Much of the
learning process, in turn, involves finding meaningful correlations between the
bits of sensory data and weaving those into your representations—thus you’ve
learned that when you get the bits of visual data that normally assemble into a
coffee cup, you can reach for it and get the bits of tactile data that normally
assemble into the feeling of picking up the cup, followed by certain sensations
of movement, followed by certain sensations of taste, temperature, etc.
corresponding to drinking the coffee.
That’s why Kant included the “thing in itself” in his
account: there really does seem to be something out there that gives rise to
the data we assemble into our representations. It’s just that the window we’re
looking through might as well be made of funhouse-mirror glass: it imposes so much of itself on the data that
trickles through it that it’s almost impossible to draw firm conclusions about
what’s “out there” from our representations.
The most we can do, most of the time, is to see what representations do
the best job of allowing us to predict what the next series of fragmentary
sensory images will include. That’s what science does, when its practitioners
are honest with themselves about its limitations—and it’s possible to do
perfectly good science on that basis, by the way.
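For those who like their illustrations concrete, here is a minimal sketch,
again in Python and with invented numbers, of what it means to prefer one
representation over another on purely predictive grounds. The two candidate
models and the data are made up; the only point being illustrated is that the
winner is whichever model predicts the next observations with the smaller
error, which by itself tells us nothing about what the process is in itself.

    # Two candidate "representations" of some process, judged only by how well
    # they predict observations they were not built from. All numbers invented.
    past   = [2.1, 3.9, 6.2, 7.8, 10.1]   # what has been observed so far
    future = [12.0, 14.1, 15.9]           # the next observations, used as the test

    def constant_model(step):
        # "Nothing changes": predict the average of what has already been seen.
        return sum(past) / len(past)

    def trend_model(step):
        # "It keeps rising at the same rate": a crude linear extrapolation.
        slope = (past[-1] - past[0]) / (len(past) - 1)
        return past[-1] + slope * step

    def prediction_error(model):
        return sum((model(i + 1) - obs) ** 2 for i, obs in enumerate(future)) / len(future)

    for model in (constant_model, trend_model):
        print(model.__name__, round(prediction_error(model), 2))
    # The trend model wins here, so it is the more useful representation;
    # nothing follows about what is "out there" in itself.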
It’s possible to do quite a lot intellectually on that
basis, in fact. From the golden age of ancient Greece straight through to the
end of the Renaissance, a field of scholarship that’s almost
completely forgotten today—topics—was an important part of a general education,
the kind of thing you studied as a matter of course once you got past grammar
school. Topics is the study of those things that can’t be proved logically, but
are broadly accepted as more or less true, and so can be used as “places” (in
Greek, topoi) on which you can ground a line of argument. The most
important of these are the commonplaces (literally, the common places or topoi)
that we all use all the time as a basis for our thinking and speaking; in
modern terms, we can think of them as “things on which a general consensus
exists.” They aren’t truths; they’re useful approximations of truths, things
that have been found to work most of the time, things to be set aside only if
you have good reason to do so.
Science could have been seen as a way to expand the range of
useful topoi. That’s what a scientific experiment does, after all: it answers
the question, “If I do this, what happens?” As the results of experiments add
up, you end up with a consensus—usually an approximate consensus, because it’s
all but unheard of for repetitions of any experiment to get exactly the same
result every time, but a consensus nonetheless—that’s accepted by the
scientific community as a useful approximation of the truth, and can be set
aside only if you have good reason to do so. To a significant extent, that’s
the way science is actually practiced—well, when it hasn’t been hopelessly
corrupted for economic or political gain—but that’s not the social role that
science has come to fill in modern industrial society.
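The “useful approximation” half of that can be put in the same toy terms. The
numbers below are invented; the sketch simply aggregates repeated runs of one
experiment into a central value and a spread, which is about as much as an
honest consensus ever amounts to.

    # Repeated runs of one (invented) experiment, boiled down to a working
    # consensus: a central value plus a spread, not a single exact truth.
    import statistics

    runs = [9.81, 9.79, 9.83, 9.80, 9.78, 9.84]

    consensus = statistics.mean(runs)
    spread = statistics.stdev(runs)

    print(f"working value: {consensus:.2f} +/- {spread:.2f}")
    # Anything inside that band gets treated as "close enough" until someone
    # produces a good reason (better data, a better experiment) to set it aside.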
I’ve written here several times already about the trap into
which institutional science has backed itself in recent decades, with the
enthusiastic assistance of the belligerent scientific materialists mentioned
earlier in this post. Public figures in the scientific community routinely like
to insist that the current consensus among scientists on any topic must be
accepted by the lay public without question, even when scientific opinion has
swung around like a weathercock in living memory, and even when unpleasantly
detailed evidence of the deliberate falsification of scientific data is
tolerably easy to find, especially but not only in the medical and
pharmaceutical fields. That insistence isn’t wearing well; nor does it help
when scientific materialists insist—as they very often do—that something can’t exist
or something else can’t happen, simply because current theory doesn’t happen to provide a
mechanism for it.
Too obsessive a fixation on that claim to authority, and the
political and financial baggage that comes with it, could very possibly result
in the widespread rejection of science across the industrial world in the
decades ahead. That’s not yet set in stone, and it’s still possible that
scientists who aren’t too deeply enmeshed in the existing order of things could
provide a balancing perspective, and help see to it that a less doctrinaire
understanding of science gets a voice and a public presence.
Doing that, though, would require an attitude we might as
well call epistemic modesty: the recognition that the human capacity to
know has hard limits, and the unqualified absolute truth about most things is
out of our reach. Socrates was called the wisest of the Greeks because he
accepted the need for epistemic modesty, and recognized that he didn’t actually
know much of anything for certain. That recognition didn’t keep him from being
able to get up in the morning and go to work at his day job as a stonecutter,
and it needn’t keep the rest of us from doing what we have to do as industrial
civilization lurches down the trajectory toward a difficult future.