Our age has no shortage of curious features, but for me, at
least, one of the oddest is the way that so many people these days don’t seem
to be able to think through the consequences of their own beliefs. Pick an
ideology, any ideology, straight across the spectrum from the most devoutly
religious to the most stridently secular, and you can count on finding a bumper
crop of people who claim to hold that set of beliefs, and recite them with all
the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t
noticed that those beliefs contradict other beliefs they claim to hold with
equal devotion.
I’m not talking here about ordinary hypocrisy. The
hypocrites we have with us always; our species being what it is, plenty of
people have always seen the advantages of saying one thing and doing another.
No, what I have in mind is saying one thing and saying another, without ever
noticing that if one of those statements is true, the other by definition has
to be false. My readers may recall the way that cowboy-hatted heavies in old
Westerns used to say to each other, “This town ain’t big enough for the two of
us”; there are plenty of ideas and beliefs that are like that, but too many
modern minds resemble nothing so much as an OK Corral where the gunfight never
happens.
An example that I’ve satirized in
an earlier post here is the bizarre way that so many people on the
rightward end of the US political landscape these days claim to be, at one and
the same time, devout Christians and fervid adherents of Ayn Rand’s violently
atheist and anti-Christian ideology. The
difficulty here, of course, is that Jesus tells his followers to humble themselves
before God and help the poor, while Rand told hers to hate God, wallow in
fantasies of their own superiority, and kick the poor into the nearest
available gutter. There’s quite
precisely no common ground between the two belief systems, and yet self-proclaimed
Christians who spout Rand’s turgid drivel at every opportunity make up a
significant fraction of the Republican Party just now.
Still, it’s only fair to point out that this sort of weird
disconnect is far from unique to religious people, or for that matter to
Republicans. One of the places it crops up most often nowadays is the
remarkable unwillingness of people who say they accept Darwin’s theory of
evolution to think through what that theory implies about the limits of human
intelligence.
If Darwin’s right, as I’ve had occasion to point out here
several times already, human intelligence isn’t the world-shaking superpower
our collective egotism likes to suppose. It’s simply a somewhat more
sophisticated version of the sort of mental activity found in many other
animals. The thing that supposedly sets it apart from all other forms of
mentation, the use of abstract language, isn’t all that unique; several species
of cetaceans and an assortment of the brainier birds communicate with their kin
using vocalizations that show all the signs of being languages in the full
sense of the word—that is, structured patterns of abstract vocal signs that
take their meaning from convention rather than instinct.
What differentiates human beings from bottlenosed porpoises,
African gray parrots, and other talking species is the mere fact that in our
case, language and abstract thinking happened to evolve in a species that also
had the sort of grasping limbs, fine motor control, and instinctive drive to
pick things up and fiddle with them, that primates have and most other animals
don’t. There’s no reason why sentience
should be associated with the sort of neurological bias that leads to
manipulating the environment, and thence to technology; as far as the evidence
goes, we just happen to be the one species in Darwin’s evolutionary casino that
got dealt both those cards. For all we know, bottlenosed porpoises have a rich
philosophical, scientific, and literary culture dating back twenty million
years; they don’t have hands, though, so they don’t have technology. All things
considered, this may be an advantage, since it means they won’t have had to
face the kind of self-induced disasters our species is so busy preparing for
itself due to the inveterate primate tendency to, ahem, monkey around with
things.
I’ve long suspected that one of the reasons why human beings
haven’t yet figured out how to carry on a conversation with bottlenosed
porpoises, African gray parrots, et al. in their own language is quite simply that
we’re terrified of what they might say to us—not least because it’s entirely
possible that they’d be right. Another reason for the lack of communication,
though, leads straight back to the limits of human intelligence. If our minds
have emerged out of the ordinary processes of evolution, what we’ve got between
our ears is simply an unusually complex variation on the standard social
primate brain, adapted over millions of years to the mental tasks that are
important to social primates—that is, staying fed, attracting mates, competing
for status, and staying out of the jaws of hungry leopards.
Notice that “discovering the objective truth about the
nature of the universe” isn’t part of this list, and if Darwin’s theory of
evolution is correct—as I believe it to be—there’s no conceivable way it could
be. The mental activities of social primates, and all other living things, have
to take the rest of the world into account in certain limited ways; our
perceptions of food, mates, rivals, and leopards, for example, have to
correspond to the equivalent factors in the environment; but it’s actually an
advantage to any organism to screen out anything that doesn’t relate to
immediate benefits or threats, so that adequate attention can be paid to the
things that matter. We perceive a rich range of colors, which most mammals don’t, because
primates need to be able to judge the ripeness of fruit from a distance; we
don’t perceive the polarization of light, as bees do, because primates don’t
need to navigate by the angle of the sun.
What’s more, the basic mental categories we use to make
sense of the tiny fraction of our surroundings that we perceive are just as
much a product of our primate ancestry as the senses we have and don’t have.
That includes the basic structures of human language, which most research
suggests are inborn in our species, as well as such derivations from language
as logic and the relation between cause and effect—this latter simply takes the
grammatical relation between subjects, verbs, and objects, and projects it onto
the nonlinguistic world. In the real world, every phenomenon is part of an
ongoing cascade of interactions so wildly hypercomplex that labels like “cause”
and “effect” are hopelessly simplistic; what’s more, a great many things—for
example, the decay of radioactive nuclei—just up and happen randomly without
being triggered by any specific cause at all. We simplify all this into cause
and effect because just enough things appear to work that way to make the habit
useful to us.
Another thing that has much more to do with our cognitive
apparatus than with the world we perceive is number. Does one apple plus one
apple equal two apples? In our number-using minds, yes; in the real world, it
depends entirely on the size and condition of the apples in question. We
convert qualities into quantities because quantities are easier for us to think
with. That was one of the core
discoveries that kickstarted the scientific revolution; when Galileo became one
of the first people in history to treat speed as a quantity, he made it
possible for everyone after him to get their minds around the concept of
velocity in a way that people before him had never quite been able to do.
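To see what that shift looks like in modern notation (a later reconstruction, not Galileo’s own symbolism), speed becomes nothing more than a ratio of two measured quantities:

```latex
% Average speed as a pure ratio of measured quantities:
% distance covered divided by the time it took to cover it.
v_{\mathrm{avg}} = \frac{\Delta x}{\Delta t} = \frac{x_2 - x_1}{t_2 - t_1}
```

Once speed is a number, it can be compared, graphed, and fed into further calculations, which is exactly what made the concept of velocity tractable.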
In physics, converting qualities to quantities works very,
very well. In some other sciences, the same thing is true, though the further
you go away from the exquisite simplicity of masses in motion, the harder it is
to translate everything that matters into quantitative terms, and the more
inevitably gets left out of the resulting theories. By and large, the more
complex the phenomena under discussion, the less useful quantitative models
are. Not coincidentally, the more complex the phenomena under discussion, the
harder it is to control all the variables in play—the essential step in using the
scientific method—and the more tentative, fragile, and dubious the models that
result.
So when we try to figure out what bottlenosed porpoises are
saying to each other, we’re facing what’s probably an insuperable barrier. All
our notions of language are social-primate notions, shaped by the peculiar mix
of neurology and hardwired psychology that proved most useful to bipedal apes
on the East African savannah over the last few million years. The structures
that shape porpoise speech, in turn, are social-cetacean notions, shaped by the
utterly different mix of neurology and hardwired psychology that’s most useful
if you happen to be a bottlenosed porpoise or one of its ancestors.
Mind you, porpoises and humans are at least fellow-mammals,
and likely share a common ancestor only around a hundred million years back.
If you want to talk to a gray parrot, you’re trying to cross a much vaster
evolutionary distance, since the ancestors of our therapsid forebears and the
ancestors of the parrot’s archosaurian progenitors have been following
divergent tracks since way back in the Paleozoic. Since language evolved
independently in each of the lineages we’re discussing, the logic of convergent
evolution comes into play: as with the eyes of vertebrates and cephalopods—another
classic case of the same thing appearing in very different evolutionary
lineages—the functions are similar but the underlying structure is very
different. Thus it’s no surprise that it’s taken exhaustive computer analyses
of porpoise and parrot vocalizations just to give us a clue that they’re using
language too.
The takeaway point I hope my readers have grasped from this
is that the human mind doesn’t know universal, objective truths. Our thoughts
are simply the way that we, as members of a particular species of social
primates, like to sort out the universe into chunks simple enough for us to
think with. Does that make human thought useless or irrelevant? Of course not;
it simply means that its uses and relevance are as limited as everything else
about our species—and, of course, every other species as well. If any of my
readers see this as belittling humanity, I’d like to suggest that fatuous
delusions of intellectual omnipotence aren’t a useful habit for any species,
least of all ours. I’d also point out that those very delusions have played a
huge role in landing us in the rising spiral of crises we’re in today.
Human beings are simply one species among many, inhabiting
part of the earth at one point in its long lifespan. We’ve got remarkable
gifts, but then so does every other living thing. We’re not the masters of the
planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of
the other self-important hogwash with which we like to tickle our collective
ego, and our attempt to act out those delusional roles with the help of a lot
of fossil carbon hasn’t exactly turned out well, you must admit. I know some
people find it unbearable to see our species deprived of its supposed place as
the precious darlings of the cosmos, but that’s just one of life’s little
learning experiences, isn’t it? Most of us make a similar discovery on the
individual scale in the course of growing up, and from my perspective, it’s
high time that humanity do a little growing up of its own, ditch the infantile
egotism, and get to work making the most of the time we have on this beautiful
and fragile planet.
The recognition that there’s a middle ground between
omnipotence and uselessness, though, seems to be very hard for a lot of people
to grasp just now. I don’t know if other bloggers in the doomosphere have this
happen to them, but every few months or so I field a flurry of attempted
comments by people who want to drag the conversation over to their conviction
that free will doesn’t exist. I don’t put those comments through, and not just
because they’re invariably off topic; the ideology they’re pushing is, to my
way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian
determinism that got chucked by working scientists rather more than a century
ago, but is still being recycled by too many people who didn’t hear the thump
when it landed in the trash can of dead theories.
A century and a half ago, it used to be a commonplace of
scientific ideology that cause and effect ruled everything, and the whole
universe was fated to rumble along a rigidly invariant sequence of events from
the beginning of time to the end thereof. The claim was quite commonly made
that a sufficiently vast intelligence, provided with a sufficiently complete data
set about the position and velocity of every particle in the cosmos at one
point in time, could literally predict everything that would ever happen
thereafter. The logic behind that claim went right out the window, though, once
experiments in the early 20th century showed conclusively that quantum
phenomena are random in the strictest sense of the word. They’re not caused by
some hidden variable; they just happen when they happen, by chance.
What determines the moment when a given atom of an unstable
isotope will throw off some radiation and turn into a different element? Pure
dumb luck. Since radiation discharges from single atoms of unstable isotopes
are an important cause of genetic mutations, and thus a core driving
force behind the process of evolution, this is much more important than it
looks. The stray radiation that gave you your eye color, dealt an otherwise
uninteresting species of lobefin fish the adaptations that made it the ancestor
of all land vertebrates, and provided the raw material for countless other
evolutionary transformations: these were
entirely random events, and would have happened differently if certain unstable
atoms had decayed at a different moment and sent their radiation into a
different ovum or spermatozoon—as they very well could have. So it doesn’t
matter how vast the intelligence or complete the data set you’ve got, the
course of life on earth is inherently impossible to predict, and so are a great
many other things that unfold from it.
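A toy simulation shows how thoroughly chance rules here. In the standard physical model, decay is memoryless: each atom’s lifetime is an exponential random draw, and two physically identical atoms decay at entirely unrelated moments. The sketch below uses carbon-14’s half-life purely for illustration:

```python
import math
import random

HALF_LIFE = 5730.0                    # years; carbon-14, for illustration
DECAY_RATE = math.log(2) / HALF_LIFE  # decay constant, per year

def decay_time():
    """Sample the moment a single atom decays, following the
    standard memoryless (exponential) model of radioactive decay."""
    return random.expovariate(DECAY_RATE)

# Ten physically identical atoms, ten unrelated decay moments;
# run it twice and you get a completely different list.
for i in range(10):
    print(f"atom {i}: decays after {decay_time():9.1f} years")
```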
With the gibbering phantom of determinism laid to rest, we
can proceed to the question of free will. We can define free will operationally
as the ability to produce genuine novelty in behavior—that is, to do things
that can’t be predicted. Human beings do this all the time, and there are very
good evolutionary reasons why they should have that capacity. Any of my readers
who know game theory will recall that the best strategy in most competitive games
includes an element of randomness, which prevents the other side from anticipating
and forestalling your side’s actions. Food gathering, in game theory terms, is
a competitive game; so are trying to attract a mate, competing for social
prestige, staying out of the jaws of hungry leopards, and most of the other
activities that pack the day planners of social primates.
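For readers who’d like to see the point in miniature, here’s a sketch of matching pennies with toy strategies of my own devising: a player who always does the same thing gets read and beaten, while a player who flips a fair coin gives an opponent nothing to learn.

```python
import random

def predictable(history):
    """Always plays heads -- any pattern this regular is exploitable."""
    return "H"

def randomizing(history):
    """Mixes 50/50 -- the unexploitable strategy in matching pennies."""
    return random.choice(["H", "T"])

def exploiter(history):
    """Guesses the opponent's most frequent move so far."""
    if not history:
        return "H"
    return max(set(history), key=history.count)

def win_rate(strategy, rounds=10_000):
    history, wins = [], 0
    for _ in range(rounds):
        move = strategy(history)
        guess = exploiter(history)
        wins += move != guess          # you win when you aren't matched
        history.append(move)
    return wins / rounds

print("predictable player:", win_rate(predictable))   # close to 0.0
print("randomizing player:", win_rate(randomizing))   # close to 0.5
```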
Unpredictability is so highly valued by our species, in
fact, that every human culture ever recorded has worked out formal ways to
increase the total amount of sheer randomness guiding human action. Yes, we’re
talking about divination—for those who don’t know the jargon, this term refers
to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all
the myriad other ways human cultures have worked out to take a snapshot of the
nonrational as a guide for action. Aside from whatever else may be involved—a
point that isn’t relevant to this blog—divination does a really first-rate job
of generating unpredictability. Flipping a coin does the same thing, and most
people have confounded the determinists by doing just that on occasion, but
fully developed divination systems like those just named provide a much richer
palette of choices than the simple coin toss, and thus enable people to
introduce a much richer range of novelty into their actions.
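The difference in richness can even be put in numbers. Treating each method purely as a uniform random source, which deliberately sets aside everything else divination may involve, a coin yields one bit of randomness per use, while the systems named above yield several times that:

```python
import math

def bits(n_outcomes):
    """Entropy, in bits, of one uniform draw from n equally likely outcomes."""
    return math.log2(n_outcomes)

print(f"coin toss:          {bits(2):5.2f} bits")
print(f"I Ching hexagram:   {bits(64):5.2f} bits")
print(f"single Tarot card:  {bits(78):5.2f} bits")
# Three-card Tarot spread, drawn without replacement, order significant:
print(f"three-card spread:  {math.log2(78 * 77 * 76):5.2f} bits")
```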
Still, divination is a crutch, or at best a supplement;
human beings have their own onboard novelty generators, which can do the job
all by themselves if given half a chance.
The process involved here was understood by philosophers a long time
ago, and no doubt the neurologists will get around to figuring it out one of
these days as well. The core of it is that humans don’t respond directly to
stimuli, external or internal. Instead,
they respond to their own mental representations of stimuli, which are
constructed by the act of cognition and are laced with bucketloads of
extraneous material garnered from memory and linked to the stimulus in uniquely
personal, irrational, even whimsical ways, following loose and wildly
unpredictable cascades of association and contiguity that have nothing to do
with logic and everything to do with the roots of creativity.
Each human society tries to give its children some
approximation of its own culturally defined set of representations—that’s
what’s going on when children learn language, pick up the customs of their
community, ask for the same bedtime story to be read to them for the umpteenth
time, and so on. Those culturally defined representations proceed to interact
in various ways with the inborn, genetically defined representations that get
handed out for free with each brand new human nervous system. The existence of these biologically and
culturally defined representations, and of various ways that they can be
manipulated to some extent by other people with or without the benefit of mass
media, makes up the ostensible reason why the people mentioned above insist that
free will doesn’t exist.
Here again, though, the fact that the human mind isn’t
omnipotent doesn’t make it powerless. Think about what happens, say, when a
straight stick is thrust into water at an angle, and the stick seems to pick up
a sudden bend at the water’s surface, due to differential refraction in water
and air. The illusion is as clear as anything, but if you show this to a child
and let the child experiment with it, you can watch the representation “the
stick is bent” give way to “the stick looks bent.” Notice what’s
happening here: the stimulus remains the same, but the representation changes,
and so do the actions that result from it. That’s a simple example of how
representations create the possibility of freedom.
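For the curious, the optics behind the illusion is Snell’s law: light bends at the surface because water and air have different refractive indices, roughly 1.33 and 1.00 respectively, so the submerged part of the stick appears displaced:

```latex
% Snell's law of refraction at the air-water interface:
% n_air ~ 1.00, n_water ~ 1.33
n_{\mathrm{air}} \sin\theta_{\mathrm{air}}
  = n_{\mathrm{water}} \sin\theta_{\mathrm{water}}
```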
In the same way, when the media spouts some absurd bit of
manipulative hogwash, if you take the time to think about it, you can watch
your own representation shift from “that guy’s having an orgasm from slurping
that fizzy brown sugar water” to “that guy’s being paid to pretend to have an
orgasm, so somebody can try to convince me to buy that fizzy brown sugar
water.” If you really pay attention, it may shift again to “why am I wasting my
time watching this guy pretend to get an orgasm from fizzy brown sugar water?”
and may even lead you to chuck your television out a second story window into
an open dumpster, as I did to the last one I ever owned. (The flash and bang
when the picture tube imploded, by the way, was far more entertaining than
anything that had ever appeared on the screen.)