How Should We Then Live?

The philosophy of Arthur Schopenhauer, which we’ve been discussing for several weeks now, isn’t usually approached from the angle by which I’ve been approaching it—that is, as a way to talk about the gap between what we think we know about the world and what we actually know about it. The aspect of his work that usually gets all the publicity is the ethical dimension.

That’s understandable but it’s also unfortunate, because the ethical dimension of Schopenhauer’s philosophy is far and away the weakest part of it. It’s not going too far to say that once he started talking about ethics, Schopenhauer slipped on a banana peel dropped in his path by his own presuppositions, and fell flat on his nose. The banana peel in question is all the more embarrassing in that he spent much of the first half of The World as Will and Representation showing that you can’t make a certain kind of statement without spouting nonsense, and then turned around and based much of the second half on exactly that kind of statement.

Let’s review the basic elements of Schopenhauer’s thinking. First, the only things we can experience are our own representations. There’s probably a real world out there—certainly that hypothesis explains the consistency of our representations with one another, and with those reported by (representations of) other people, with less handwaving than any other theory—but all the data we get from the world out there amounts to a thin trickle of sensory data, which we then assemble into representations of things using a set of prefab templates provided partly by our species’ evolutionary history and partly by habits we picked up in early childhood. How much those representations have to do with what’s actually out there is a really good question that’s probably insoluble in principle.

Second, if we pay attention to our experience, we encounter one thing that isn’t a representation—the will. You don’t experience the will itself; you encounter its effects, but everything you experience is given its framing and context by the will. Is it “your” will? The thing you call “yourself” is a representation like any other; explore it using any of at least three toolkits—sustained introspection, logical analysis, and scientific experimentation—and you’ll find that what’s underneath the representation of a single self that chooses and wills is a bundle of blind forces, divergent and usually poorly coordinated, that get in each other’s way, interfere with each other’s actions, and produce the jumbled and self-defeating mess that by and large passes for ordinary human behavior.

Third, the point just made is difficult for us to accept because our culture prefers to think of the universe as consisting of mind and matter—more precisely, active, superior, personal mind and passive, inferior, impersonal matter. Schopenhauer pokes at both of these concepts and finds them wanting. What we call mind, from his perspective, is simply one of the more complex and less robust grades of will—it’s what happens when the will gets sufficiently tangled and bashed about that it picks up the habit of representing a world to itself, so that it can use that as a map to avoid the more obvious sources of pain. Matter is a phantom—an arbitrarily defined “stuff” we use to pretend that our representations really do exist out there in reality.

Fourth, since the only things we encounter when we examine the world are representations, on the one hand, and will in its various modes on the other, we really don’t have any justification for claiming that anything else actually exists. Maybe there are all kinds of other things out there in the cosmos, but if all we actually encounter are will and representations, and a description of the cosmos as representation and will makes sense of everything we meet with in the course of life, why pile up unnecessary hypotheses just because our cultural habits of thought beg for them?

Thus the world Schopenhauer presents to us is the world we encounter—provided that we do in fact pay attention to what we encounter, rather than insisting that our representations are realities and our culturally engrained habits of thought are more real than the things they’re supposed to explain. The difficulty, of course, is that imagining a universe of mind and matter allows us to pretend that our representations are objective realities and that thoughts about things are more real than the things themselves—and both of these dodges are essential to the claim, hammered into the cultural bedrock of contemporary industrial society, that we and we alone know the pure unvarnished truth about things.

From Schopenhauer’s perspective, that’s exactly what none of us can know. We can at best figure out that when this representation appears, that representation will usually follow, and work out formal models—we call these scientific theories—that allow us to predict, more or less, the sequence of representations that appear in certain contexts. We can’t even do that much reliably when things get complex enough; at that point we have to ditch the formal models and just go with narrative patterns, the way I’ve tried to do in discussing the ways that civilizations decline and fall.

Notice that this implies that the more general a statement is, the further removed it is from that thin trickle of sensory data on which the whole world of representations is based, and the more strictly subjective it is. That means, in turn, that any value judgment applied to existence as a whole must be utterly subjective, an expression of the point of view of the person making that judgment, rather than any kind of objective statement about existence itself.

There’s the banana peel on which Schopenhauer slipped, because having set up the vision of existence I’ve just described, he turned around and insisted that existence is objectively awful and the only valid response to it for anyone, anywhere, is to learn to nullify the will to live and, in due time, cease to be.

Is that one possible subjective response to the world in which we find ourselves? Of course, and some people seem to find it satisfying. Mind you, the number of them that actually go out of their way to cease existing is rather noticeably smaller than the number who find such notions pleasing in the abstract. Schopenhauer himself is a helpful example. Having insisted in print that all pleasure is simply a prelude to misery and an ascetic lifestyle ending in extinction is the only meaningful way to live, he proceeded to live to a ripe old age, indulging his taste for fine dining, music, theater, and the more than occasional harlot. I’m not sure how you’d translate “do what I say, not what I do” into classical Greek, but it would have made an appropriate epigraph for The World as Will and Representation.

Now of course a failure to walk one’s talk is far from rare among intellectuals, especially those of ascetic leanings, and the contrast between Schopenhauer’s ideals and his actions doesn’t disprove the value of the more strictly epistemological part of his work. It does, however, point up an obvious contradiction in his thinking. Accept the basic assumptions of his philosophy, after all, and it follows that the value judgments we apply to the representations we encounter are just as much a product of our own minds as the representations themselves; they’re not objective qualities of the things we judge, even though we’re used to treating them that way.

We treat them that way, in turn, because for the last two millennia or so it’s been standard for prophetic religious traditions to treat them that way. By “prophetic religious traditions” I mean those that were founded by individual persons—Gautama the Buddha, Jesus of Nazareth, Muhammad, and so on—or were reshaped in the image of such faiths, the way Judaism was reshaped in the image of the Zoroastrian religion after the Babylonian captivity. (As Raphael Patai pointed out in quite some detail a while back in his book The Hebrew Goddess, Judaism wasn’t monotheistic until the Jews picked up that habit from their Zoroastrian Persian liberators; quite a few other traits of post-Exilic Judaism, such as extensive dietary taboos, also have straightforward Zoroastrian origins.)

A range of contrasts separates the prophetic religions from the older polytheist folk religions that they supplanted over most of the world, but one of the crucial points of difference is in value judgments concerning human behavior—or, as we tend to call them these days, moral judgments. The gods and goddesses of folk religions are by and large no more moral, or interested in morality, than the forces of nature they command and represent; some expect human beings to maintain certain specific customs—Zeus, for example, was held by the ancient Greeks to punish those who violated traditional rules of hospitality—but that was about it. The deities central to most prophetic religions, by contrast, are all about moral judgment.

The scale of the shift can be measured easily enough from the words “morals” and “ethics” themselves. It’s become popular of late to try to make each of these mean something different, but the only actual difference between them is that “morals” comes from Latin and “ethics” comes from Greek. Back in classical times, though, they had a shared meaning that isn’t the one given to them today. The Latin word moralia derives from mores, the Greek word ethike derives from ethoi, and mores and ethoi both mean “customs” or “habits,” without the language of judgment associated with the modern words.

To grasp something of the difference, it’s enough to pick up a copy of Aristotle’s Nicomachean Ethics, by common consent the most important work of what we’d now call moral philosophy that came out of the ancient world. It’s not ethics or morals in any modern sense of the word; it’s a manual on how to achieve personal greatness, and it manages to discuss most of the territory now covered by ethics without ever stooping to the kind of moral denunciation that pervades ethical thought in our time.

Exactly why religion and morality got so thoroughly conflated in the prophetic religions is an interesting historical question, and one that deserves more space than a fraction of one blog post can provide. The point I want to address here is the very difficult fit between the sharp limits on human knowledge and the sweeping presuppositions of moral knowledge that modern societies have inherited from the age of prophetic religions. If we don’t actually know anything but our representations, and can draw only tentative conclusions from them, do we really know enough to make sweeping generalizations about good and evil?

The prophetic religions themselves actually have a workable response to that challenge. Most of them freely admit that human beings don’t have the capacity to judge rightly between good and evil without help, and go on to argue that this is why everyone needs to follow the rules set down in scripture as interpreted by the religious specialists of their creed. Grant the claim that their scriptures were actually handed down from a superhumanly wise source, and it logically follows that obeying the moral rules included in the scriptures is a reasonable action. It’s the basic claim, of course, that’s generally the sticking point; since every prophetic religion has roughly the same evidence backing its claim to divine inspiration as every other, and their scriptures all contradict one another over important moral issues, it’s not exactly easy to draw straightforward conclusions from them.

Their predicament is a good deal less complex, though, than that of people who’ve abandoned the prophetic religions of their immediate ancestors and still want to make sweeping pronouncements about moral goodness and evil. It’s here that the sly, wry, edgy voice of Friedrich Nietzsche becomes an unavoidable presence, because the heart of his philosophy was an exploration of what morality means once a society can no longer believe that its tribal taboos were handed down intact, and will be enforced via thunderbolt or eternal damnation, by the creator of the universe.

Nietzsche’s philosophical writings are easy to misunderstand, and he very likely meant that to be the case. Where Schopenhauer proceeded step by step through a single idea in all its ramifications, showing that the insight at the core of his vision makes sense of the entire world of our experience, Nietzsche wrote in brief essays and aphorisms, detached from one another, dancing from theme to theme. He was less interested in convincing people than in making them think; each of the short passages that make up his major philosophical works is meant to be read, pondered, and digested on its own. All in all, his books make excellent bathroom reading—and I suspect that Nietzsche himself would have been amused by that approach to his writings.

The gravitational center around which Nietzsche’s various thought experiments orbited, though, was a challenge to the conventional habits of moral discourse in his time and ours. For those who believe in a single, omniscient divine lawgiver, it makes perfect sense to talk about morals in the way that most people in his time and ours do in fact talk about them—that is to say, as though there’s some set of moral rules that are clearly set out and incontrovertibly correct, and the task of the moral philosopher is to badger and bully his readers into doing what they know they ought to do anyway.

From any other perspective, on the other hand, that approach to talking about morals is frankly bizarre. It’s not just that every set of moral rules that claims to have been handed down by the creator of the universe contradicts every other such set, though of course this is true. It’s that every such set of rules has proven unsatisfactory when applied to human beings. The vast amount of unnecessary misery that’s resulted from historical Christianity’s stark terror of human sexuality is a case in point, though it’s far from the only example, and far from the worst.

Yet, of course, most of us do talk about moral judgments as though we know what we’re talking about, and that’s where Nietzsche comes in. Here’s his inimitable voice, from the preface to Beyond Good and Evil, launching a discussion of the point at issue:

“Supposing truth to be a woman—what? Is the suspicion not well founded that all philosophers, when they have been dogmatists, have had little understanding of women? That the gruesome earnestness, the clumsy importunity with which they have hitherto been in the habit of approaching truth have been inept and improper means for winning a wench? Certainly she has not let herself be won—and today every kind of dogmatism stands sad and discouraged.”

Nietzsche elsewhere characterized moral philosophy as the use of bad logic to prop up inherited prejudices. The gibe’s a good one, and generally far more accurate than not, but again it’s easy to misunderstand. Nietzsche was not saying that morality is a waste of time and we all ought to run out and do whatever happens to come into our heads, from whatever source. He was saying that we don’t yet know the first thing about morality, because we’ve allowed bad logic and inherited prejudices to get in the way of asking the necessary questions—because we haven’t realized that we don’t yet have any clear idea of how to live.

To a very great extent, if I may insert a personal reflection here, this realization has been at the heart of this blog’s project since its beginning. The peak oil crisis that called The Archdruid Report into being came about because human beings have as yet no clear idea how to get along with the biosphere that supports all our lives; the broader theme that became the core of my essays here over the years, the decline and fall of industrial civilization, shows with painful clarity that human beings have as yet no clear idea how to deal with the normal and healthy cycles of historical change; the impending fall of the United States’ global empire demonstrates the same point on a more immediate and, to my American readers, more personal scale. Chase down any of the varied ramblings this blog has engaged in over the years, and you’ll find that most if not all of them have the same recognition at their heart: we don’t yet know how to live, and maybe we should get to work figuring that out.

***
I’d like to wind up this week’s post with three announcements. First of all, I’m delighted to report that the latest issue of the deindustrial-SF quarterly Into the Ruins is now available. Those of you who’ve read previous issues know that you’re in for a treat; those who haven’t—well, what are you waiting for? Those of my readers who bought a year’s subscription when Into the Ruins first launched last year should also keep in mind that it’s time to re-up, and help support one of the few venues for science fiction about the kind of futures we’re actually likely to get once the fantasy of perpetual progress drops out from under us and we have to start coping with the appalling mess that we’ve made of things.

***
Second, I’m equally delighted to announce that a book of mine that’s been out of print for some years is available again. The Academy of the Sword is the most elaborate manual of sword combat ever written; it was penned in the early seventeenth century by Gerard Thibault, one of the greatest European masters of the way of the sword, and published in 1630, and it bases its wickedly effective fencing techniques on Renaissance Pythagorean sacred geometry. I spent almost a decade translating it out of early modern French and finally got it into print in 2006, but the original publisher promptly sank under a flurry of problems that were partly financial and partly ethical. Now the publisher of my books Not the Future We Ordered and Twilight’s Last Gleaming has brought it back into print in an elegant new hardback edition. New editions of my first two published books, Paths of Wisdom and Circles of Power, are in preparation with the same publisher as I write this, so it’s shaping up to be a pleasant spring for me.

***
Finally, this will be the last post of The Archdruid Report for a while. I have a very full schedule in the weeks immediately ahead, and several significant changes afoot in my life, and won’t be able to keep up the weekly pace of blog posts while those are happening. I’m also busily sorting through alternative platforms for future blogging and social media—while I’m grateful to Blogger for providing a free platform for my blogging efforts over the past eleven years, each recent upgrade has made it more awkward to use, and it’s probably time to head elsewhere. When I resume blogging, it will thus likely be on a different platform, and quite possibly with a different name and theme. I’ll post something here and on the other blog once things get settled. In the meantime, have a great spring, and keep asking the hard questions even when the talking heads insist they have all the answers.

The Magic Lantern Show

The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last three weeks, was enormously influential in European intellectual circles from the last quarter of the nineteenth century straight through to the Second World War.  That doesn’t mean that it influenced philosophers; by and large, in fact, the philosophers ignored Schopenhauer completely. His impact landed elsewhere: among composers and dramatists, authors and historians, poets, pop-spirituality teachers—and psychologists.

We could pursue any one of those and end up in the place I want to reach.  The psychologists offer the straightest route there, however, with useful vistas to either side, so that’s the route we’re going to take this week. To the psychologists, two closely linked things mattered about Schopenhauer. The first was that his analysis showed that the thing each of us calls “myself” is a representation rather than a reality, a convenient way of thinking about the loose tangle of competing drives and reactions we’re taught to misinterpret as a single “me” that makes things happen. The second was that his analysis also showed that what lies at the heart of that tangle is not reason, or thinking, or even consciousness, but blind will.

The reason that this was important to them, in turn, was that a rising tide of psychological research in the second half of the nineteenth century made it impossible to take seriously what I’ve called the folk metaphysics of western civilization:  the notion that each of us is a thinking mind perched inside the skull, manipulating the body as though it were a machine, and now and then being jabbed and jolted by the machinery. From Descartes on, as we’ve seen, that way of thinking about the self had come to pervade the western world. The only problem was that it never really worked.

It wasn’t just that it did a very poor job of explaining the way human beings actually relate to themselves, each other, and the surrounding world, though this was certainly true. It also fostered attitudes and behaviors that, when combined with certain attitudes about sexuality and the body, yielded a bumper crop of mental and physical illnesses. Among these was a class of illnesses that seemed to have no physical cause, but caused immense human suffering: the hysterical neuroses.  You don’t see these particular illnesses much any more, and there’s a very good reason for that.

Back in the second half of the nineteenth century, though, a huge number of people, especially but not only in the English-speaking world, were afflicted with apparently neurological illnesses such as paralysis, when their nerves demonstrably had nothing wrong with them. One very common example was “glove anesthesia”: one hand, normally the right hand, would become numb and immobile. From a physical perspective, that makes no sense at all; the nerves that bring feeling and movement to the hand run down the whole arm in narrow strips, so that if there were actually nerve damage, you’d get paralysis in one such strip all the way along the arm. There was no physical cause that could produce glove anesthesia, and yet it was relatively common in Europe and America in those days.

That’s where Sigmund Freud entered the picture.

It’s become popular in recent years to castigate Freud for his many failings, and since some of those failings were pretty significant, this hasn’t been difficult to do. More broadly, his fate is that of all thinkers whose ideas become too widespread: most people forget that somebody had to come up with the ideas in the first place. Before Freud’s time, a phrase like “the conscious self” sounded redundant—it had occurred to very, very few people that there might be any other kind—and the idea that desires that were rejected and denied by the conscious self might seep through the crawlspaces of the psyche and exert an unseen gravitational force on thought and behavior would have been dismissed as disgusting and impossible, if anybody had even thought of it in the first place.

From the pre-Freud perspective, the mind was active and the body was passive; the mind was conscious and the body was incapable of consciousness; the mind was rational and the body was incapable of reasoning; the mind was masculine and the body was feminine; the mind was luminous and pure and the body was dark and filthy.  These two were the only parts of the self; nothing else need apply, and physicians, psychologists, and philosophers alike went out of their way to raise high barriers between the two. This vision of the self, in turn, was what Freud destroyed.

We don’t need to get into the details of his model of the self or his theory of neurosis; most of those have long since been challenged by later research. What mattered, ironically enough, wasn’t Freud’s theories or his clinical skills, but his immense impact on popular culture. It wasn’t all that important, for example, what evidence he presented that glove anesthesia is what happens when someone feels overwhelming guilt about masturbating, and unconsciously resolves that guilt by losing the ability to move or feel the hand habitually used for that pastime.

What mattered was that once a certain amount of knowledge of Freud’s theories spread through popular culture, anybody who had glove anesthesia could be quite sure that every educated person who found out about it would invariably think, “Guess who’s been masturbating!” Since one central point of glove anesthesia was to make a symbolic display of obedience to social convention—“See, I didn’t masturbate, I can’t even use that hand!”—the public discussion of the sexual nature of that particular neurosis made the neurosis itself too much of an embarrassment to put on display.

The frequency of glove anesthesia, and a great many other distinctive neuroses of sexual origin, thus dropped like a rock once Freud’s ideas became a matter of general knowledge. Freud therefore deserves the honor of having extirpated an entire class of diseases from the face of the earth. That the theories that accomplished this feat were flawed and one-sided simply adds to his achievement.

Like so many pioneers in the history of ideas, you see, Freud made the mistake of overgeneralizing from success, and ended up convincing himself and a great many of his students that sex was the only unstated motive that mattered. There, of course, he was quite wrong, and those of his students who were willing to challenge the rapid fossilization of Freudian orthodoxy quickly demonstrated this. Alfred Adler, for example, showed that unacknowledged cravings for power, ranging along the whole spectrum from the lust for domination to the longing for freedom and autonomy, can exert just as forceful a gravitational attraction on thought and behavior as sexuality.

Carl Jung then upped the ante considerably by showing that there is also an unrecognized motive, apparently hardwired in place, that pushes the tangled mess of disparate drives toward states of increased integration. In a few moments we’ll be discussing Jung in rather more detail, as some of his ideas mesh very well indeed with the Schopenhauerian vision we’re pursuing in this sequence of posts. What’s relevant at this point in the discussion is that all the depth psychologists—Freud and the Freudians, Adler and the Adlerians, Jung and the Jungians, not to mention their less famous equivalents—unearthed a great deal of evidence showing that the conscious thinking self, the supposed lord and master of the body, was froth on the surface of a boiling cauldron, much of whose contents was unmentionable in polite company.

Phenomena such as glove anesthesia played a significant role in that unearthing. When someone wracked by guilt about masturbating suddenly loses all feeling and motor control in one hand, when a psychosomatic illness crops up on cue to stop you from doing something you’ve decided you ought to do but really, really, don’t want to do, or when a Freudian slip reveals to all present that you secretly despise the person whom, for practical reasons, you’re trying to flatter—just who is making that decision? Who’s in charge? It’s certainly not the conscious thinking self, who as often as not is completely in the dark about the whole thing and is embarrassed or appalled by the consequences.

The quest for that “who,” in turn, led depth psychologists down a great many twisting byways, but the most useful of them for our present purposes was the one taken by Carl Jung.

Like Freud, Jung gets castigated a lot these days for his failings, and in particular it’s very common for critics to denounce him as an occultist. As it happens, this latter charge is very nearly accurate.  It was little more than an accident of biography that landed him in the medical profession and sent him chasing after the secrets of the psyche using scientific methods; he could have as easily become a professional occultist, part of the thriving early twentieth century central European occult scene with which he had so many close connections throughout his life. The fact remains that he did his level best to pursue his researches in a scientific manner; his first major contribution to psychology was a timed word-association test that offered replicable, quantitative proof of Freud’s theory of repression, and his later theories—however wild they appeared—had a solid base in biology in general, and in particular in ethology, the study of animal behavior.

Ethologists had discovered well before Jung’s time that instincts in the more complex animals seem to work by way of hardwired images in the nervous system. When goslings hatch, for example, they immediately look for the nearest large moving object, which becomes Mom. Ethologist Konrad Lorenz became famous for deliberately triggering that reaction, and being instantly adopted by a small flock of goslings, who followed him dutifully around until they were grown. (He returned the favor by feeding them and teaching them to swim.) What Jung proposed, on the basis of many years of research, is that human beings also have such hardwired images, and a great deal of human behavior can be understood best by watching those images get triggered by outside stimuli.

Consider what happens when a human being falls in love. Those who have had that experience know that there’s nothing rational about it. Something above or below or outside the thinking mind gets triggered and fastens onto another person, who suddenly sprouts an alluring halo visible only to the person in love; the thinking mind gets swept away, shoved aside, or dragged along sputtering and complaining the whole way; the whole world gets repainted in rosy tints—and then, as often as not, the nonrational factor shuts off, and the former lover is left wondering what on Earth he or she was thinking—which is of course exactly the wrong question, since thinking had nothing to do with it.

This, Jung proposed, is the exact equivalent of the goslings following Konrad Lorenz down to the lake to learn how to swim. Most human beings have a similar set of reactions hardwired into their nervous systems, put there over countless generations of evolutionary time to establish the sexual pair bonds that play so important a role in human life. Exactly what triggers those reactions varies significantly from person to person, for reasons that (like most aspects of human psychology) are partly genetic, partly epigenetic, partly a matter of environment and early experience, and partly unknown. Jung called the hardwired image at the center of that reaction an archetype, and showed that it surfaces in predictable ways in dreams, fantasies, and other contexts where the deeper, nonrational levels come within reach of consciousness.

The pair bonding instinct isn’t the only one that has its distinctive archetype. There are several others. For example, there’s a mother-image and a father-image, which are usually (but not always) triggered by the people who raise an infant, and may be triggered again at various points in later life by other people. Another very powerful archetype is the image of the enemy, which Jung called the Shadow. The Shadow is everything you hate, which means in effect that it’s everything you hate about yourself—but inevitably, until a great deal of self-knowledge has been earned the hard way, that’s not apparent at all. Just as the Anima or Animus, the archetypal image of the lover, is inevitably projected onto other human beings, so is the Shadow, very often with disastrous results.

In evolutionary terms, the Shadow fills a necessary role. Confronted with a hostile enemy, human or animal, the human or not-quite-human individual who can access the ferocious irrational energies of rage and hatred is rather more likely to come through alive and victorious than the one who can only draw on the very limited strengths of the conscious thinking self. Outside such contexts, though, the Shadow is a massive and recurring problem in human affairs, because it constantly encourages us to attribute all of our own most humiliating and unwanted characteristics to the people we like least, and to blame them for the things we project onto them.

Bigotries of every kind, including the venomous class bigotries I discussed in an earlier post, show the presence of the Shadow.  We project hateful qualities onto every member of a group of people because that makes it easier for us to ignore those same qualities in ourselves. Notice that the Shadow doesn’t define its own content; it’s a dumpster that can be filled with anything that cultural pressures or personal experiences lead us to despise.

Another archetype, though, deserves our attention here, and it’s the one that the Shadow helpfully clears of unwanted content. That’s the ego, the archetype that each of us normally projects upon ourselves. In place of the loose tangle of drives and reactions each of us actually is, a complex interplay of blind pressures striving with one another and with a universe of pressures from without, the archetype of the ego portrays us to ourselves as single, unified, active, enduring, conscious beings. Like the Shadow, the ego-archetype doesn’t define its own content, which is why different societies around the world and throughout time have defined the individual in different ways.

In the industrial cultures of the modern western world, though, the ego-archetype typically gets filled with a familiar set of contents, the ones we discussed in last week’s post: the mind, the conscious thinking self, as distinct from the body, comprising every other aspect of human experience and action. That’s the disguise the loose tangle of complex and conflicting wills takes on in us, and it meets us at first glance whenever we turn our attention to ourselves, just as inevitably as the rose-tinted glory of giddy infatuation meets the infatuated lover who glances at his or her beloved, or the snarling, hateful, inhuman grimace of the Shadow meets those who encounter one of the people onto whom they have projected their own unacceptable qualities.

All this, finally, circles back to points I made in the first post in this sequence. The process of projection we’ve just been observing is the same, in essence, as the one that creates all the other representations that form the world we experience. You look at a coffee cup, again, and you think you see a solid, three-dimensional material object, because you no longer notice the complicated process by which you assemble fragmentary glimpses of unrelated sensory input into the representation we call a coffee cup. In exactly the same way, but to an even greater extent, you don’t notice the processes by which the loose tangle of conflicting wills each of us calls “myself” gets overlaid with the image of the conscious thinking self, which our cultures provide as raw material for the ego-archetype to feed on.

Nor, of course, do you notice the acts of awareness that project warm and alluring emotions onto the person you love, or hateful qualities onto the person you hate. It’s an essential part of the working of the mind that, under normal circumstances, these wholly subjective qualities should be experienced as objective realities. If the lover doesn’t project that roseate halo onto the beloved, if the bigot doesn’t project all those hateful qualities onto whatever class of people has been selected for their object, the archetype isn’t doing its job properly, and it will fail to have its effects—which, again, exist because they’ve proven to be more successful than not over the course of evolutionary time.

Back when Freud was still in medical school, one common entertainment among the well-to-do classes of Victorian Europe was the magic lantern show. A magic lantern is basically an early slide projector; they were used in some of the same ways that PowerPoint presentations are used today, though in the absence of moving visual media, they also filled many of the same niches as movies and television do today. (I’m old enough to remember when slide shows of photos from distant countries were still a tolerably common entertainment, for that matter.) The most lurid and popular of magic lantern shows, though, used the technology to produce spooky images in a darkened room—look, there’s a ghost! There’s a demon! There’s Helen of Troy come back from the dead!  Like the performances of stage magicians, the magic lantern show produced a simulacrum of wonders in an age that had convinced itself that miracles didn’t exist but still longed for them.

The entire metaphor of “projection” used by Jung and other depth psychologists came from these same performances, and it’s a very useful way of making sense of the process in question. An image inside the magic lantern appears to be out there in the world, when it’s just projected onto the nearest convenient surface; in the same way, an image within the loose tangle of conflicting wills we call “ourselves” appears to be out there in the world, when it’s just projected onto the nearest convenient person—or it appears to be the whole truth about the self, when it’s just projected onto the nearest convenient tangle of conflicting wills.

Is there a way out of the magic lantern show? Schopenhauer and Jung both argued that yes, there is—not, to be sure, a way to turn off the magic lantern, but a way to stop mistaking the projections for realities.  There’s a way to stop spending our time professing undying love on bended knee to one set of images projected on blank walls, and flinging ourselves into mortal combat against another set of images so projected; there’s a way, to step back out of the metaphor, to stop confusing the people around us with the images we like to project on them, and interact with them rather than with the images we’ve projected. 

The ways forward that Jung and Schopenhauer offered were different in some ways, though the philosopher’s vision influenced the psychologist’s to a great extent. We’ll get to their road maps as this conversation proceeds; first, though, we’re going to have to talk about some extremely awkward issues, including the festering swamp of metastatic abstractions and lightly camouflaged bullying that goes these days by the name of ethics.

I’ll offer one hint here, though. Just as we don’t actually learn how to love until we find a way to integrate infatuation with less giddy approaches to relating to another person, just as we don’t learn to fight competently until we can see the other guy’s strengths and weaknesses for what they are rather than what our projections would like them to be, we can’t come to terms with ourselves until we stop mistaking the ego-image for the whole loose tangled mess of self, and let something else make its presence felt. As for what that something else might be—why, we’ll get to that in due time.

A Muddle of Mind and Matter

The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last two weeks, has a feature that reliably irritates most people when they encounter it for the first time: it doesn’t divide up the world the way people in modern western societies habitually do. To say, as Schopenhauer does, that the world we experience is a world of subjective representations, and that we encounter the reality behind those representations in will, is to map out the world in a way so unfamiliar that it grates on the nerves. Thus it came as no surprise that last week’s post fielded a flurry of responses trying to push the discussion back onto the more familiar ground of mind and matter.

That was inevitable. Every society has what I suppose could be called its folk metaphysics, a set of beliefs about the basic nature of existence that are taken for granted by most people in that society, and the habit of dividing the world of our experience into mind and matter is among the core elements of the folk metaphysics of the modern western world. Most of us think of it, on those occasions when we think of it at all, as simply the way the world is. It rarely occurs to most of us that there’s any other way to think of things—and when one shows up, a great many of us back away from it as fast as possible.

Yet dividing the world into mind and matter is really rather problematic, all things considered. The most obvious difficulty is the relation between the two sides of the division. This is usually called the mind-body problem, after the place where each of us encounters that difficulty most directly. Grant for the sake of argument that each of us really does consist of a mind contained in a material body: how do these two connect? It’s far from easy to come up with an answer that works.

Several approaches have been tried in the attempt to solve the mind-body problem. There’s dualism, which is the claim that there are two entirely different and independent kinds of things in the world—minds and bodies—and which requires proponents to come up with various ways to justify the connection between them. First place for philosophical brashness in this connection goes to Rene Descartes, who argued that the link was directly and miraculously caused by the will of God. Plenty of less blatant methods of handwaving have been used to accomplish the same trick, but all of them require question-begging maneuvers of various kinds, and none has yet managed to present any kind of convincing evidence for itself.

Then there are the reductionistic monisms, which attempt to account for the relationship of mind and matter by reducing one of them to the other. The most popular reductionistic monism these days is reductionistic materialism, which claims that what we call “mind” is simply the electrochemical activity of those lumps of matter we call human brains. Though it’s a good deal less popular these days, there’s also reductionistic idealism, which claims that what we call “matter” is brought into being by the activity of minds, or of Mind.

Further out still, you get the eliminative monisms, which deal with the relationship between mind and matter by insisting that one of them doesn’t exist. There are eliminative materialists, for example, who insist that mental experiences don’t exist, and our conviction that we think, feel, experience pain and pleasure, etc. is an “introspective illusion.” (I’ve often thought that one good response to such a claim would be to ask, “Do you really think so?” The consistent eliminative materialist would have to answer “No.”) There are also eliminative idealists, who insist that matter doesn’t exist and that all is mind.

There’s probably been as much effort expended on the mind-body problem as on any other single philosophical issue in modern times, and yet it remains the focus of endless debates even today. That sort of intellectual merry-go-round is usually a pretty good sign that the basic assumptions at the root of the question have some kind of lethal flaw. That’s particularly true when this sort of ongoing donnybrook isn’t the only persistent difficulty surrounding the same set of ideas—and that’s very much the case here.

After all, there’s a far more personal sense in which the phrase “mind-body problem” can be taken. To speak in the terms usual for our culture, this thing we’re calling “mind” includes only a certain portion of what we think of as our inner lives. What, after all, counts as “mind”? In the folk metaphysics of our culture, and in most of the more formal systems of thought based on it, “mind” is consciousness plus the thinking and reasoning functions, perhaps with intuition (however defined) tied on like a squirrel’s  tail to the antenna of an old-fashioned jalopy. The emotions aren’t part of mind, and neither are such very active parts of our lives as sexual desire and the other passions; it sounds absurd, in fact, to talk about “the emotion-body problem” or the “passion-body problem.” Why does it sound absurd? Because, consciously or unconsciously, we assign the emotions and the passions to the category of “body,” along with the senses.

This is where we get the second form of the mind-body problem, which is that we’re taught implicitly and explicitly that the mind governs the body, and yet the functions we label “body” show a distinct lack of interest in obeying the functions we call “mind.” Sexual desire is of course the most obvious example. What people actually desire and what they think they ought to desire are quite often two very different things, and when the “mind” tries to bully the “body” into desiring what the “mind” thinks it ought to desire, the results are predictably bad. Add enough moral panic to the mix, in fact, and you end up with sexual hysteria of the classic Victorian type, in which the body ends up being experienced as a sinister Other responding solely to its own evil propensities, the seductive wiles of other persons, or the machinations of Satan himself despite all the efforts of the mind to rein it in.

Notice the implicit hierarchy woven into the folk metaphysics just sketched out, too. Mind is supposed to rule matter, not the other way around; mind is active, while matter is passive or, at most, subject to purely mechanical pressures that make it lurch around in predictable ways. When things don’t behave that way, you tend to see people melt down in one way or another—and the universe being what it is, things don’t actually behave that way very often, so the meltdowns come at regular intervals.

They also arrive in an impressive range of contexts, because the way of thinking about things that divides them into mind and matter is remarkably pervasive in western societies, and pops up in the most extraordinary places. Think of the way that our mainstream religions portray God as the divine Mind ruling omnipotently over a universe of passive matter; that’s the ideal toward which our notions of mind and body strive, and predictably never reach. Think of the way that our entertainment media can always evoke a shudder of horror by imagining that something we assign to the category of lifeless matter—a corpse in the case of zombie flicks, a machine in such tales as Stephen King’s Christine, or what have you—suddenly starts acting as though it possesses a mind.

For that matter, listen to the more frantic end of the rhetoric on the American left following the recent presidential election and you’ll hear the same theme echoing off the hills. The left likes to think of itself as the smart people, the educated people, the sensitive and thoughtful and reasonable people—in effect, the people of Mind. The hate speech that many of them direct toward their political opponents leans just as heavily on the notion that these latter are stupid, uneducated, insensitive, irrational, and so on—that is to say, the people of Matter. Part of the hysteria that followed Trump’s election, in turn, might best be described as the political equivalent of the instinctive reaction to a zombie flick: the walking dead have suddenly lurched out of their graves and stalked toward the ballot box, the body politic has rebelled against its self-proclaimed mind!

Let’s go deeper, though. The habit of dividing the universe of human experience into mind and matter isn’t hardwired into the world, or for that matter into human consciousness; there have been, and are still, societies in which people simply don’t experience themselves and the world that way. The mind-body problem and the habits of thought that give rise to it have a history, and it’s by understanding that history that it becomes possible to see past the problem toward a solution.

That history takes its rise from an interesting disparity among the world’s great philosophical traditions. The three that arose independently—the Chinese, the Indian, and the Greek—focused on different aspects of humanity’s existence in the world. Chinese philosophy from earliest times directed its efforts to understanding the relationship between the individual and society; that’s why the Confucian mainstream of Chinese philosophy is resolutely political and social in its focus, exploring ways that the individual can find a viable place within society, and the alternative Taoist tradition in its oldest forms (before it absorbed mysticism from Indian sources) focused on ways that the individual can find a viable place outside society. Indian philosophy, by contrast, directed its efforts to understanding the nature of individual existence itself; that’s why the great Indian philosophical schools all got deeply into epistemology and ended up with a strong mystical bent.

The Greek philosophical tradition, in turn, went to work on a different set of problems. Greek philosophy, once it got past its initial fumblings, fixed its attention on the world of thought. That’s what led Greek thinkers to transform mathematics from an unsorted heap of practical techniques to the kind of ordered system of axioms and theorems best exemplified by Euclid’s Elements of Geometry, and it’s also what led Greek thinkers in the same generation as Euclid to create logic, one of the half dozen or so greatest creations of the human mind. Yet it also led to something considerably more problematic: the breathtaking leap of faith by which some of the greatest intellects of the ancient world convinced themselves that the structure of their thoughts was the true structure of the universe, and that thoughts about things were therefore more real than the things themselves.

The roots of that conviction go back all the way to the beginnings of Greek philosophy, but it really came into its own with Parmenides, an important philosopher of the generation immediately before Plato. Parmenides argued that there were two ways of understanding the world, the way of truth and the way of opinion; the way of opinion consisted of understanding the world as it appears to the senses, which according to Parmenides means it’s false, while the way of truth consisted of understanding the world the way that reason proved it had to be, even when this contradicted the testimony of the senses. To be sure, there are times and places where the testimony of the senses does indeed need to be corrected by logic, but it’s at least questionable whether this should be taken anything like as far as Parmenides took it—he argued, for example, that motion was logically impossible, and so nothing ever actually moves, even though it seems that way to our deceiving senses.

The idea that thoughts about things are more real than things settled into what would be its classic form in the writings of Plato, who took Parmenides’ distinction and set to work to explain the relationship between the worlds of truth and opinion. To Plato, the world of truth became a world of forms or ideas, on which everything in the world of sensory experience is modeled. The chair we see, in other words, is a projection or reflection downwards into the world of matter of the timeless, pure, and perfect form or idea of chair-ness. The senses show us the projections or reflections; the reasoning mind shows us the eternal form from which they descend.

That was the promise of classic Platonism—that the mind could know the truth about the universe directly, without the intervention of the senses, the same way it could know the truth of a mathematical demonstration. The difficulty with this enticing claim, though, was that when people tried to find the truth about the universe by examining their thinking processes, no two of them discovered exactly the same truth, and the wider the cultural and intellectual differences between them, the more different the truths turned out to be. It was for this reason among others that Aristotle, whose life’s work was basically that of cleaning up the mess that Plato and his predecessors left behind, made such a point of claiming that nothing enters the mind except through the medium of the senses. It’s also why the Academy, the school founded by Plato, in the generations immediately after his time took a hard skeptical turn, and focused relentlessly on the limits of human knowledge and reasoning.

Later on, Greek philosophy and its Roman foster-child headed off in other directions—on the one hand, into ethics, and the question of how to live the good life in a world where certainty isn’t available; on the other, into mysticism, and the question of whether the human mind can experience the truth of things directly through religious experience. A great deal of Plato’s thinking, however, got absorbed by the Christian religion after the latter clawed its way to respectability in the fourth century CE.

Augustine of Hippo, the theologian who basically set the tone of Christianity in the west for the next fifteen centuries, had been a Neoplatonist before he returned to his Christian roots, and he was far from the only Christian of that time to drink deeply from Plato's well. In his wake, Platonism became the standard philosophy of the western church until it was displaced by a modified version of Aristotle’s philosophy in the high Middle Ages. Thinkers divided the human organism into two portions, body and soul, and began the process by which such things as sexuality and the less angelic emotions got exiled from the soul into the body.

Even after Thomas Aquinas made Aristotle popular again, the basic Parmenidean-Platonic notion of truth had been so thoroughly bolted into Christian theology that it rode right over any remaining worries about the limitations of human reason. The soul trained in the use of reason could see straight to the core of things, and recognize by its own operations such basic religious doctrines as the existence of God:  that was the faith with which generations of scholars pursued the scholastic philosophy of medieval times, and those who disagreed with them rarely quarreled over their basic conception—rather, the point at issue was whether the Fall had left the human mind so vulnerable to the machinations of Satan that it couldn’t count on its own conclusions, and the extent to which divine grace would override Satan’s malicious tinkerings anywhere this side of heaven.

If you happen to be a devout Christian, such questions make sense, and they matter. It’s harder to see how they still made sense and mattered as the western world began moving into its post-Christian era in the eighteenth century, and yet the Parmenidean-Platonic faith in the omnipotence of reason gained ground as Christianity ebbed among the educated classes. People stopped talking about soul and body and started talking about mind and body instead.

Since mind, mens in Latin, was already in common use as a term for the faculty of the soul that handled its thinking and could be trained to follow the rules of reason, that shift was of vast importance. It marked the point at which the passions and the emotions were shoved out of the basic self-concept of the individual in western culture, and exiled to the body, that unruly and rebellious lump of matter in which the mind is somehow caged.

That’s one of the core things that Schopenhauer rejected. As he saw it, the mind isn’t the be-all and end-all of the self, stuck somehow into the prison house of the body. Rather, the mind is a frail and unstable set of functions that surface now and then on top of other functions that are much older, stronger, and more enduring. What expresses itself through all these functions, in turn, is will:  at the most basic primary level, as the will to exist; on a secondary level, as the will to live, with all the instincts and drives that unfold from that will; on a tertiary level, as the will to experience, with all the sensory and cognitive apparatus that unfolds from that will; and on a quaternary level, as the will to understand, with all the abstract concepts and relationships that unfold from that will.

Notice that from this point of view, the structure of thought isn't the structure of the cosmos, just a set of convenient models, and thoughts about things are emphatically not more real than the things themselves.  The things themselves are wills, expressing themselves through their several modes.  The things as we know them are representations, and our thoughts about the things are abstract patterns we create out of memories of representations, and thus at two removes from reality.

Notice also that from this point of view, the self is simply a representation—the ur-representation, the first representation each of us makes in infancy as it gradually sinks in that there’s a part of the kaleidoscope of our experience that we can move at will, and a lot more that we can’t, but still just a representation, not a reality. Of course that’s what we see when we first try to pay attention to ourselves, just as we see the coffee cup discussed in the first post in this series. It takes exacting logical analysis, scientific experimentation, or prolonged introspection to get past the representation of the self (or the coffee cup), realize that it’s a subjective construct rather than an objective reality, and grasp the way that it’s assembled out of disparate stimuli according to preexisting frameworks that are partly hardwired into our species and partly assembled over the course of our lives.

Notice, finally, that those functions we like to call “mind”—in the folk metaphysics of our culture, again, these are consciousness and the capacity to think, with a few tag-ends of other functions dangling here and there—aren’t the essence of who we are, the ghost in the machine, the Mini-Me perched inside the skull that pushes and pulls levers to control the passive mass of the body and gets distracted by the jabs and lurches of the emotions and passions. The functions we call “mind,” rather, are a set of delicate, tentative, and fragile functions of will, less robust and stable than most of the others, and with no inherent right to rule the other functions. The Schopenhauerian self is an ecosystem rather than a hierarchy, and if what we call “mind” sits at the top of the food chain like a fox in a meadow, that simply means that the fox has to spend much of its time figuring out where mice like to go, and even more of its time sleeping in its den, while the mice scamper busily about and the grass goes quietly about turning sunlight, water, and carbon dioxide into the nutrients that support the whole system.

Accepting this view of the self requires sweeping revisions of the ways we like to think about ourselves and the world, which is an important reason why so many people react with acute discomfort when it’s suggested. Nonetheless those revisions are of crucial importance, and as this discussion continues, we’ll see how they offer essential insights into the problems we face in this age of the world—and into their potential solutions.

The World as Will

It’s impressively easy to misunderstand the point made in last week’s post here on The Archdruid Report. To say that the world we experience is made up of representations of reality, constructed in our minds by taking the trickle of data we get from the senses and fitting it into patterns that are there already, doesn’t mean that nothing exists outside of our minds. Quite the contrary, in fact; there are two very good reasons to think that there really is something “out there,” a reality outside our minds that produces the trickle of data we’ve discussed.

The first of those reasons seems almost absurdly simple at first glance: the world doesn’t always make sense to us. Consider, as one example out of godzillions, the way that light seems to behave like a particle on some occasions and like a wave on others. That’s been described, inaccurately, as a paradox, but it’s actually a reflection of the limitations of the human mind.

What, after all, does it mean to call something a particle? Poke around the concept for a while and you’ll find that at root, this concept “particle” is an abstract metaphor, extracted from the common human experience of dealing with little round objects such as pebbles and marbles. What, in turn, is a wave? Another abstract metaphor, extracted from the common human experience of watching water in motion. When a physicist says that light sometimes acts like a particle and sometimes like a wave, what she’s saying is that neither of these two metaphors fits more than a part of the way that light behaves, and we don’t have any better metaphor available.

If the world were nothing but a hallucination projected by our minds, then it would contain nothing that wasn’t already present in our minds—for what other source could there be?  That implies in turn that there would be a perfect match between the contents of the world and the contents of our minds, and we wouldn’t get the kind of mismatch between mind and world that leaves physicists flailing. More generally, the fact that the world so often baffles us offers good evidence that behind the world we experience, the world as representation, there’s some “thing in itself” that’s the source of the sense data we assemble into representations.

The other reason to think that there’s a reality distinct from our representations is that, in a certain sense, we experience such a reality at every moment.

Raise one of your hands to a position where you can see it, and wiggle the fingers. You see the fingers wiggling—or, more precisely, you see a representation of the wiggling fingers, and that representation is constructed in your mind out of bits of visual data, a great deal of memory, and certain patterns that seem to be hardwired into your mind. You also feel the fingers wiggling—or, here again, you feel a representation of the wiggling fingers, which is constructed in your mind out of bits of tactile and kinesthetic data, plus the usual inputs from memory and hardwired patterns. Pay close attention and you might be able to sense the way your mind assembles the visual representation and the tactile one into a single pattern; that happens close enough to the surface of consciousness that a good many people can catch themselves doing it.

So you’ve got a representation of wiggling fingers, part of the world as representation we experience. Now ask yourself this: the action of the will that makes the fingers wiggle—is that a representation?

This is where things get interesting, because the only reasonable answer is no, it’s not. You don’t experience the action of the will as a representation; you don’t experience it at all. You simply wiggle your fingers. Sure, you experience the results of the will’s action in the form of representations—the visual and tactile experiences we’ve just been considering—but not the will itself. If it were true that you could expect to see or hear or feel or smell or taste the impulse of the will rolling down your arm to the fingers, say, it would be reasonable to treat the will as just one more representation. Since that isn’t the case, it’s worth exploring the possibility that in the will, we encounter something that isn’t just a representation of reality—it’s a reality we encounter directly.

That’s the insight at the foundation of Arthur Schopenhauer’s philosophy. Schopenhauer’s one of the two principal guides who are going to show us around the giddy funhouse that philosophy has turned into of late, and guide us to the well-marked exits, so you’ll want to know a little about him. He lived in the ramshackle assortment of little countries that later became the nation of Germany; he was born in 1788 and died in 1860; he got his doctorate in philosophy in 1813; he wrote his most important work, The World as Will and Representation, before he turned thirty; and he spent all but the last ten years of his life in complete obscurity, ignored by the universities and almost everyone else. A small inheritance, carefully managed, kept him from having to work for a living, and so he spent his time reading, writing, playing the flute for an hour a day before dinner, and grumbling under his breath as philosophy went its merry way into metaphysical fantasy. He grumbled a lot, and not always under his breath. Fans of Sesame Street can think of him as philosophy’s answer to Oscar the Grouch.

Schopenhauer came of age intellectually in the wake of Immanuel Kant, whose work we discussed briefly last week, and so the question he faced was how philosophy could respond to the immense challenge Kant threw at the discipline’s feet. Before you go back to chattering about what’s true and what’s real, Kant said in effect, show me that these labels mean something and relate to something, and that you’re not just chasing phantoms manufactured by your own minds.

Most of the philosophers who followed in Kant’s footsteps responded to his challenge by ignoring it, or using various modes of handwaving to pretend that it didn’t matter. One common gambit at the time was to claim that the human mind has a special superpower of intellectual intuition that enables it to leap tall representations in a single bound, and get to a direct experience of reality that way. What that meant in practice, of course, was that a philosopher could treat whatever abstractions he fancied as truths that didn’t have to be proved, and then build a great tottering system on top of them; after all, he’d intellectually intuited them—prove that he hadn’t!

There were other such gimmicks. What set Schopenhauer apart was that he took Kant’s challenge seriously enough to go looking for something that wasn’t simply a representation. What he found—why, that brings us back to the wiggling fingers.

As discussed in last week’s post, every one of the world’s great philosophical traditions has ended up having to face the same challenge Kant flung in the face of the philosophers of his time. Schopenhauer knew this, since a fair amount of philosophy from India had been translated into European languages by his time, and he read extensively on the subject. This was helpful because Indian philosophy hit its own epistemological crisis around the tenth century BCE, a good twenty-seven centuries before modern Western philosophy got there, and so had a pretty impressive head start. There’s a rich diversity of responses to that crisis in the classical Indian philosophical schools, but most of them came to see consciousness as a (or the) thing-in-itself, as reality rather than representation.

It’s a plausible claim. Look at your hand again, with or without wiggling fingers. Now be aware of yourself looking at the hand—many people find this difficult, so be willing to work at it, and remember to feel as well as see. There’s your hand; there’s the space between your hand and your eyes; there’s whatever of your face you can see, with or without eyeglasses attached; pay close attention and you can also feel your face and your eyes from within; and then there’s—

There’s the thing we call consciousness, the whatever-it-is that watches through your eyes. Like the act of will that wiggled your fingers, it’s not a representation; you don’t experience it. In fact, it’s very like the act of will that wiggled your fingers, and that’s where Schopenhauer went his own way.

What, after all, does it mean to be conscious of something? Some simple examples will help clarify this. Move your hand until it bumps into something; it’s when something stops the movement that you feel it. Look at anything; you can see it if and only if you can’t see through it. You are conscious of something when, and only when, it resists your will.

That suggested to Schopenhauer that consciousness derives from will, not the other way around. There are other lines of reasoning that point in the same direction, and all of them derive from common human experiences. For example, each of us stops being conscious for some hours out of every day, whenever we go to sleep. During part of the time we’re sleeping, we experience nothing at all; during another part, we experience the weirdly disconnected representations we call “dreams.”  Even in dreamless sleep, though, it’s common for a sleeper to shift a limb away from an unpleasant stimulus. Thus the will is active even when consciousness is absent.

Schopenhauer proposed that there are different forms or, as he put it, grades of the will. Consciousness, which we can define for present purposes as the ability to experience representations, is one grade of the will—one way that the will can adapt to existence in a world that often resists it. Life is another, more basic grade. Consider the way that plants orient themselves toward sunlight, bending and twisting like snakes in slow motion, and seek out concentrations of nutrients with probing, hungry roots. As far as anyone knows, plants aren’t conscious—that is, they don’t experience a world of representations the way that animals do—but they display the kind of goal-seeking behavior that shows the action of will.

Animals also show goal-seeking behavior, and they do it in a much more complex and flexible way than plants do. There’s good reason to think that many animals are conscious, and experience a world of representations in something of the same way we do; certainly students of animal behavior have found that animals let incidents from the past shape their actions in the present, mistake one person for another, and otherwise behave in ways that suggest that their actions are guided, as ours are, by representations rather than direct reaction to stimuli. In animals, the will has developed the ability to represent its environment to itself.

Animals, at least the more complex ones, also have that distinctive mode of consciousness we call emotion. They can be happy, sad, lonely, furious, and so on; they feel affection for some beings and aversion toward others. Pay attention to your own emotions and you’ll soon notice how closely they relate to the will. Some emotions—love and hate are among them—are motives for action, and thus expressions of will; others—happiness and sadness are among them—are responses to the success or failure of the will to achieve its goals. While emotions are tangled up with representations in our minds, and presumably in those of animals as well, they stand apart; they’re best understood as conditions of the will, expressions of its state as it copes with the world through its own representations.

And humans? We’ve got another grade of the will, which we can call intellect:  the ability to add up representations into abstract concepts, which we do, ahem, at will. Here’s one representation, which is brown and furry and barks; here’s another like it; here’s a whole kennel of them—and we lump them all together in a single abstract category, to which we assign a sound such as “dog.” We can then add these categories together, creating broader categories such as “quadruped” and “pet;” we can subdivide the categories to create narrower ones such as “puppy” and “Corgi;” we can extract qualities from the whole and treat them as separate concepts, such as “furry” and “loud;” we can take certain very general qualities and conjure up the entire realm of abstract number, by noticing how many paws most dogs have and using that, and a great many other things, to come up with the concept of “four.”
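
Since the lumping, broadening, narrowing, and quality-extraction just described are easier to see in miniature than in prose, here’s a toy sketch in Python (every instance, quality, and category name is invented for illustration, and nothing about it pretends to describe how minds actually work) that performs the same moves on bundles of remembered qualities:

```python
# Toy model: each "representation" is a bundle of remembered qualities.
dogs = [
    {"furry", "barks", "four-legged", "small"},   # a puppy
    {"furry", "barks", "four-legged", "loud"},    # a watchdog
    {"furry", "barks", "four-legged", "short"},   # a Corgi
]

# Lumping: the abstract category "dog" keeps only what its instances share.
dog = set.intersection(*dogs)          # {"furry", "barks", "four-legged"}

# Broadening: "quadruped" drops qualities until the cat fits too.
cat = {"furry", "meows", "four-legged"}
quadruped = dog & cat                  # {"furry", "four-legged"}

# Narrowing: "puppy" adds a quality back onto "dog".
puppy = dog | {"small"}

# Extracting a quality: "furry" becomes a concept in its own right,
# applicable to anything that carries it.
furry_things = [bundle for bundle in dogs + [cat] if "furry" in bundle]

print(dog, quadruped, puppy, len(furry_things), sep="\n")
```

The only point of the sketch is structural: each abstraction is assembled out of the representations beneath it, and so stands at a further remove from whatever those representations represent.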

So life, consciousness, and intellect are three grades of the will. One interesting thing about them is that the more basic ones are more enduring and stable than the more complex ones. Humans, again, are good examples. Humans remain alive all the way from birth to death; they’re conscious only when awake; they’re intelligent only when actively engaged in thinking—which is a lot less often than we generally like to admit. A certain degree of tiredness, a strong emotion, or a good stiff drink is usually enough to shut off the intellect and leave us dealing with the world on the same mental basis as an ordinarily bright dog; it takes quite a bit more to reduce us to the vegetative level, and serious physical trauma to go one more level down.

Let’s take a look at that final level, though. The conventional wisdom of our age holds that everything that exists is made up of something called “matter,” which is configured in various ways; further, that matter is what really exists, and everything else is somehow a function of matter if it exists at all. For most of us, this is the default setting, the philosophical opinion we start from and come back to, and anyone who tries to question it can count on massive pushback.

The difficulty here is that philosophers and scientists have both proved, in their own ways, that the usual conception of matter is quite simply nonsense. Any physical scientist worth his or her sodium chloride, to begin with, will tell you that what we habitually call “solid matter” is nearly as empty as the vacuum of deep space—a bit of four-dimensional curved spacetime that happens to have certain tiny probability waves spinning dizzily in it, and it’s the interaction between those probability waves and those composing that other patch of curved spacetime we each call “my body” that creates the illusions of solidity, color, and the other properties we attribute to matter.

The philosophers got to the same destination a couple of centuries earlier, and by a different route. The epistemologists I mentioned in last week’s post—Locke, Berkeley, and Hume—took the common conception of matter apart layer by layer and showed, to use the formulation we’ve already discussed, that all the things we attribute to matter are simply representations in the mind. Is there something out there that causes those representations? As already mentioned, yes, there’s very good reason to think so—but that doesn’t mean that the “something out there” has to consist of matter in any sense of the word that means anything.

That’s where Schopenhauer got to work, and once again, he proceeded by calling attention to certain very basic and common human experiences. Each of us has direct access, in a certain sense, to one portion of the “something out there,” the portion each of us calls “my body.” When we experience our bodies, we experience them as representations, just like anything else—but we also act with them, and as the experiment with the wiggling fingers demonstrated, the will that acts isn’t a representation.

Thus there’s a boundary between the part of the universe we encounter as will and representation, and the part we encounter only as representation. The exact location of that boundary is more complex than it seems at first sight. It’s a commonplace in the martial arts, for example, that a capable martial artist can learn to feel with a weapon as though it were a part of the body. Many kinds of swordsmanship rely on what fencers call sentiment de fer, the “sense of the steel;” the competent fencer can feel the lightest touch of the other blade against his own, just as though it brushed his hand.

There are also certain circumstances—lovemaking, dancing, ecstatic religious experience, and mob violence are among them—in which, under hard-to-replicate conditions, two or more people seem to become, at least briefly, a single entity that moves and acts with a will of its own. All of those involve a shift from the intellect to a more basic grade of the will, and they lead in directions that will deserve a good deal more examination later on; for now, the point at issue is that the boundary line between self and other can be a little more fluid than we normally tend to assume.

For our present purposes, though, we can set that aside and focus on the body as the part of the world each of us encounters in a twofold way: as a representation among representations, and as a means of expression for the will.  Everything we perceive about our bodies is a representation, but by noticing these representations, we observe the action of something that isn’t a representation, something we call the will, manifesting in its various grades. That’s all there is. Go looking as long as you want, says Schopenhauer, and you won’t find anything but will and representations. What if that’s all there is—if the thing we call “matter” is simply the most basic grade of the will, and everything in the world thus amounts to will on the one hand, and representations experienced by that mode of will we call consciousness on the other, and the things our representations represent are various expressions of this one energy that, by way of its distinctive manifestations in our own experience, we call the will?

That’s Schopenhauer’s vision. The remarkable thing is how close it is to the vision that comes out of modern science. A century before quantum mechanics, he’d already grasped that behind the facade of sensory representations that you and I call matter lies an incomprehensible and insubstantial reality, a realm of complex forces dancing in the void. Follow his arguments out to their logical conclusion and you get a close enough equivalent of the universe of modern physics that it’s not at all implausible that they’re one and the same. Of course plausibility isn’t proof—but given the fragile, dependent, and derivative nature of the human intellect, it may be as close as we can get.

And of course that latter point is a core reason why Arthur Schopenhauer spent most of his life in complete obscurity and why, after a brief period of mostly posthumous superstardom in the late nineteenth century, his work dropped out of sight and has rarely been noticed since. (To be precise, it’s one of two core reasons; we’ll get to the other one later.) If he’s right, then the universe is not rational. Reason—the disciplined use of the grade of will I’ve called the intellect—isn’t a key to the truth of things.  It’s simply the systematic exploitation of a set of habits of mind that turned out to be convenient for our ancestors as they struggled with the hard but intellectually undemanding tasks of staying fed, attracting mates, chasing off predators, and the like, and later on got pulled out of context and put to work coming up with complicated stories about what causes the representations we experience.

To suggest that, much less to back it up with a great deal of argument and evidence, is to collide head on with one of the most pervasive presuppositions of our culture. We’ll survey the wreckage left behind by that collision in next week’s post.

The World as Representation

It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène humain (the English translation, predictably, was titled The Phenomenon of Man) got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.

Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction.  At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city—or any college town with pretensions to intellectual culture, for that matter—you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.

We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in the New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as feigned interest in the subject was half a century ago.

Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century.  In an era when every state teacher’s college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.

Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral—it’s possible to practice science from just about any philosophical standpoint you care to name—but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.

There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways—one that the philosophical traditions of other cultures reached long before it, with similar consequences—and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.

To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world—and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.

That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.

That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.

The same refocusing began in the modern world with René Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles—John Locke, George Berkeley, and David Hume—rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of the things we think of as “out there,” including such apparently hard realities as space and time, are actually artifacts of the processes by which we perceive things.

Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.

What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation;” when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing—literally re-presenting—an image of the cup in your mind.

There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.

What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there.” You can prove this by means of science: the cup “out there,” according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection—that’s how Indian philosophers got there in the age of the Upanishads—and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.

The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled one of his books Prolegomena to Any Future Metaphysics; what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first.  When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do—a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing.  Where that leads—well, we’ll get to that later on.

For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom. This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there—the “thing in itself” (Ding an sich), as Kant called it—and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.

The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one: “What? and others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be—the work of our organs!” That is to say, the body is also a representation—or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.

Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life—each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects them to get. 

By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters.” Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations—thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, etc. corresponding to drinking the coffee.

That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror:  it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations.  The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations—and it’s possible to do perfectly good science on that basis, by the way.
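
To make that criterion concrete, here’s a minimal sketch in Python (the hidden generator, the two candidate models, and all the numbers are invented for illustration) in which two representations of the same trickle of data are judged solely by how well they predict the readings that come next:

```python
import random

random.seed(0)

# A hidden "thing in itself" produces the trickle of sensory data;
# we never inspect it directly, only its noisy readings.
def sensory_stream(n):
    return [0.5 * t + random.gauss(0, 1) for t in range(n)]

data = sensory_stream(40)
past, future = data[:30], data[30:]

# Two candidate representations of what's "out there":
mean_of_past = sum(past) / len(past)

def model_constant(t):
    return mean_of_past                       # "it's a fixed value"

slope = (past[-1] - past[0]) / (len(past) - 1)

def model_trend(t):
    return past[-1] + slope * (t - 29)        # "it's a steady drift"

def prediction_error(model):
    return sum((model(t) - x) ** 2 for t, x in enumerate(future, start=30))

# The representation that better anticipates the next readings wins;
# that says nothing final about what the hidden generator "really" is.
print("constant model error:", round(prediction_error(model_constant), 1))
print("trend model error:   ", round(prediction_error(model_trend), 1))
```

Nothing in the sketch reaches past the data to the generator itself; the “trend” model wins only in the sense that it predicts better, which is exactly the limit described above.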

It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today—topics—was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists.” They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.

Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus—usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless—that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced—well, when it hasn’t been hopelessly corrupted for economic or political gain—but that’s not the social role that science has come to fill in modern industrial society.
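
In the same spirit, here’s a minimal sketch, with made-up measurements, of how repetitions of an experiment that never quite agree can still add up to a usable commonplace:

```python
import statistics

# Invented results of one experiment repeated nine times; no two runs
# agree exactly, just as the text notes.
runs = [9.79, 9.83, 9.81, 9.78, 9.85, 9.80, 9.82, 9.79, 9.84]

consensus = statistics.mean(runs)
spread = statistics.stdev(runs)
standard_error = spread / len(runs) ** 0.5

# The consensus is a topos: an approximation to be relied on until
# there's good reason to set it aside, not an unqualified truth.
print(f"consensus: {consensus:.3f} +/- {standard_error:.3f}")
```

The output is a working consensus, not a verdict on what the measured quantity “really” is.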

I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely like to insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist—as they very often do—that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.

Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a counterweight, and help see to it that a less doctrinaire understanding of science gets a voice and a public presence.

Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and the unqualified absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.

Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.