The Monkeywrench Wars


Among science fiction author Arthur C. Clarke’s many gifts was a mordant sense of humor, and a prime example of that gift in action was his 1951 short story "Superiority." It’s the story of a space war told by the commanding general of the losing side; he is explaining to some interstellar equivalent of the Nuremberg war crimes tribunal how his forces managed to lose.

The question is of some interest, as the space fleets and resources of the losing side were far superior to those of the victors. So, however, was their technology.  "However" is the operative word, for each brilliantly innovative wonder weapon fielded by their scientists turned out to have disastrous downsides when put into service, while the winning side simply kept on churning out unimaginative space battleships using old but proven technology.  By the time the losing side realized that it should have done the same thing, it was so far behind that only a new round of wonder weapons seemed to offer any hope of victory—and a little more of that same logic finished them off.

It’s been suggested by more than one wit that life imitates art far more often than art imitates life. The United States military these days seems intent on becoming a poster child for that proposal. Industrial design classes at MIT used to hand out copies of "Superiority" as required reading; unfortunately that useful habit has not been copied by the Pentagon, and as a result, the US armed forces are bristling with brilliantly innovative wonder weapons that don’t do what they’re supposed to do.

The much-ballyhooed Predator drone is one good example among many.  For those who don’t follow military technology, it’s a remotely piloted plane that can loiter for hours over a target area, equipped with a TV camera and missiles.  The operator, sitting in an air-conditioned office building in Nevada, can control it anywhere on Earth via satellite uplink, seek out suspected terrorists, and vaporize them.  Does it work?  Well, it’s vaporized quite a few people; the Obama administration is even more drone-happy than its feckless predecessor, and has been sending swarms of drones around various corners of the Middle East to fire missiles at a great many suspected terrorists.

You’ll notice that this has done little to stabilize the puppet governments we’ve got in the Middle East these days, and even less to decrease the rate at which American soldiers are getting shot and blown up in Afghanistan.  There’s a reason for that.  The targets for drone attacks have to be selected by ordinary intelligence methods—terrorists don’t go around with little homing beacons on them, you know—and ordinary intelligence methods have a relatively low signal-to-noise ratio.  As a result, a lot of wedding parties and ordinary households get vaporized on the suspicion that there might be a terrorist hiding in there somewhere.  Since tribal custom in large parts of the Middle East makes blood vengeance on the murderers of one’s family members an imperative duty, and there are all these American soldiers conveniently stationed in Afghanistan—well, you can do the math for yourself.

Thus the Predator drone isn’t a war-fighting technology, it’s a war-losing technology, pursued with ever-increasing desperation by a military and political establishment that has no idea what to do but can’t bear the thought of doing nothing.  The same logic drove the policy of torture so disingenuously defended by the Bush administration.  (Yes, waterboarding is torture. Anyone who wishes to disagree is welcome to undergo the procedure themselves and then offer an informed opinion.) Beyond the moral issues, there’s a practical point that’s far from minor:  torture doesn’t work.  It’s not an effective way of extracting accurate information from prisoners; it’s an effective way of making prisoners say what the torturer wants to hear—I recall the comment of the elderly Knight Templar, after his session on the rack, that in order to get his torturers to stop he would readily have confessed to murdering God. 

Thus torture is another war-losing technology.  Technically speaking, it’s a good way to maximize confirmation bias, which is what cognitive psychologists call the habit of looking for evidence that supports your presuppositions rather than testing those presuppositions against the real world.  It appeals powerfully to the sort of squeaky-voiced machismo that played so large a role in the Bush administration’s foreign policy, but wars are not won by imposing one’s own delusions on a global battlefield; they’re won by figuring out what’s out there in the world, and responding to it.

They’re also won by remembering that what’s out there in the world is also responding to you.  To grasp how this works, it’s going to be necessary to talk about systems again—specifically, about the three ways a system can mess you over.

There may be official names for these somewhere, but I’ve taken the liberty of borrowing terminology from Discordianism and calling them chaos, discord, and confusion. For a timely example of chaos, it’s hard to do better than Tropical Storm Isaac, which is churning its way into the eastern Caribbean as I write this. As systems go, a tropical storm is a fairly simple one—basically, a heat engine in which all the moving parts are made of air and water, with a few feedback loops linking it to its environment.  Those loops are what make it chaotic; a tropical storm’s behavior is determined by its environment, but its environment is constantly being reshaped by the tropical storm, so that perturbations too small to track or anticipate can spin out of control and drive major shifts in size, speed, and direction.

Thus you can never know exactly where a tropical storm is going to go, or how hard it’s going to hit. The most you can know is where, on average, storms like the one you’re watching have tended to go, and what they’ve done when they got there.  That’s chaos:  unpredictability because the other system’s interactions with its environment are too complex to be accurately anticipated.
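
For readers who like to see this behavior in miniature, a few lines of code will do it. The sketch below is purely illustrative: it uses the logistic map, a mathematical toy, as a stand-in for a storm’s feedback loops, not as a model of any real weather system. Two runs that start a hair’s breadth apart go their separate ways, yet the long-run statistics remain perfectly usable:

```python
# Chaos in miniature: the logistic map as a toy stand-in for any
# feedback-coupled system. Two trajectories that begin a hair's
# breadth apart end up nowhere near each other, yet the ensemble
# average stays predictable.

def logistic(x, r=4.0):
    """One step of the logistic map in its chaotic regime."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)  # a perturbation "too small to track or anticipate"

for t in (0, 10, 25, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}   gap {abs(a[t] - b[t]):.6f}")

# Point forecasts fail, but averages over many runs stay stable:
finals = [trajectory(0.4 + i * 1e-4)[-1] for i in range(100)]
print("ensemble mean of final states:", sum(finals) / len(finals))
```

Run it and you’ll see the two trajectories agree closely for a dozen steps, then part company completely, which is exactly why forecasters track ensembles of likely storm paths rather than betting everything on a single prediction.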

If we shift attention from Tropical Storm Isaac to the latest recall of bacteria-tainted produce, we move from chaos to discord.  Individually, bacteria are nearly as dumb as storms, but a species of bacteria taken as a whole has a curious analogue to intelligence.  All living systems are value-oriented—that is, they value some states (such as staying alive) more than other states (such as becoming dead) and take actions to bring about the states they value.  That makes them considerably more challenging to deal with than storms, because they take active steps to counter any change that threatens their survival.

That’s the factor that drives the evolution of antibiotic resistance in bacteria, for example. Successful microbe species maintain a constant pressure on their ecological boundaries via genetic variation.  The DNA dice are constantly rolling, and it doesn’t matter if the odds against the combination of genes they need to survive in an antibiotic-rich environment are in the millions-to-one range; as long as they aren’t driven to extinction, they’ll roll boxcars sooner or later.  That’s discord:  unpredictability because the other system is constantly modifying its own behavior to pursue values that conflict with yours.
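
To put rough numbers on those dice (the figures below are invented for illustration, not measured mutation rates), consider how quickly millions-to-one odds collapse when the dice get rolled a billion times a day:

```python
# Back-of-the-envelope arithmetic on bacterial "boxcars."  The rates
# below are illustrative assumptions, not measured biological values.

import math

p_mutation = 1e-7        # odds any one division yields the resistant gene combination
divisions_per_day = 1e9  # daily divisions in a modest bacterial population

# Expected number of resistant mutants arising each day:
print("expected resistant mutants per day:", p_mutation * divisions_per_day)

# Probability that at least one appears: 1 - (1 - p)^n, which for
# small p is approximately 1 - e^(-n*p).
p_at_least_one = 1.0 - math.exp(-divisions_per_day * p_mutation)
print(f"P(at least one resistant mutant per day) = {p_at_least_one:.10f}")
```

At those odds the only question is when, not whether; that is precisely what makes a value-oriented system so much harder to suppress than a storm.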

Compare bacterial evolution to the behavior of a tropical storm and the difference between chaos and discord is easy to grasp.  Tropical storms aren’t value-oriented; they simply respond in complicated ways to subtle changes in environmental conditions they themselves play a part in causing.  Imagine, though, a tropical storm that started seeking out patches of warm water and moving away from wind shear, so it could prolong its own existence and increase in strength.  That’s what all living things do, from bacteria to readers of The Archdruid Report.  Tropical storms don’t, which is a good thing; there would be a lot more cataclysmic hurricanes if they did.

To go to the next level, let’s imagine an ecosystem of living tropical storms: seeking out the warm water that feeds them, dodging the wind shear that can kill them, and competing against other storms.  That’s all in the realm of discord.  Imagine, though, that a storm that achieves hurricane status becomes conscious and capable of abstract thought.  It can think about the future and make plans.  It becomes aware of other hurricanes, and realizes that those other hurricanes can frustrate its plans if they can figure out the plans in time to do something about them. The result is confusion:  uncertainty because the other system is deliberately trying to fool you.

It’s crucial to grasp that what I’ve called chaos, discord, and confusion are fundamentally different kinds of uncertainty, and the tricks that will help you deal with one will blow up in your face if you apply them to the others. Statistical analysis, for instance, can give you a handle on a chaotic system:  meteorologists trying to predict the movements of a storm can study the trajectories of past storms and get a good idea of where the storm is most likely to go. Apply that to bacteria, and you’ll be blindsided sooner or later, because the bacteria are constantly generating genetic novelty and thus shifting the baseline on which the statistics rely.  Apply it to an enemy in war, and you’ve made a lethal mistake; once your enemy figures out what you’re expecting, they’ll play along to lull you into a sense of false security, and then come out of the blue and stomp you.
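
The contrast can be demonstrated with a toy experiment. In the sketch below, all three "systems" are invented stand-ins rather than models of real storms, microbes, or armies, and the same statistical trick, betting on the historical average, gets applied to each kind of uncertainty in turn:

```python
# One forecasting trick, three kinds of uncertainty.  The "systems"
# here are invented stand-ins: stationary noise for chaos, a drifting
# baseline for discord, and an adversary that reads your forecast
# for confusion.

import random
random.seed(42)

def predict(history):
    """Statistical forecasting: assume the future resembles the past."""
    return sum(history) / len(history)

# Chaos: noisy but stationary.  The historical average is a good guess.
chaotic = [random.gauss(10.0, 2.0) for _ in range(200)]

# Discord: the system keeps shifting its own baseline out from under you.
discordant = [random.gauss(10.0 + t * 0.05, 2.0) for t in range(200)]

# Confusion: the adversary sees the forecast coming and lands elsewhere.
confused, history = [], [10.0]
for _ in range(200):
    forecast = predict(history)
    actual = forecast + random.choice([-8.0, 8.0])  # deliberately off-forecast
    confused.append(actual)
    history.append(actual)

for name, series in [("chaos", chaotic), ("discord", discordant),
                     ("confusion", confused)]:
    errors = [abs(series[t] - predict(series[:t])) for t in range(100, 200)]
    print(f"{name:9}: mean forecast error = {sum(errors) / len(errors):.2f}")
```

Typical output shows a small, stable error for chaos, a steadily growing error for discord, and an error stuck near the adversary’s chosen offset for confusion. More data will not fix the last two, because the baseline itself refuses to hold still.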

This bit of systems theory is relevant here because American culture has a very hard time dealing with any kind of uncertainty at all. That’s partly the legacy of Newtonian science, which saw itself—or at least liked to portray itself in public—as the quest for absolutely invariant laws of nature.  If X occurs, then Y must occur:  that sort of statement is the paradigmatic form of knowledge in industrial societies.  One of the great scientific achievements of the 20th century was the expansion of science into fields that can only be known statistically—quantum mechanics, meteorology, ecology, and more. Even there, though, the lure of the supposedly invariant has been a constant source of trouble, while those fields that routinely throw discord and confusion at the researcher are by and large the fields that have remained stubbornly resistant to scientific inquiry and technological control.

It also explains a good bit of why the United States has stumbled from one failed counterinsurgency after another since the Second World War.  There’s more to it than that—I’ll explain next week why the American way of war guarantees that any country invaded and occupied by the United States is sure to pup an insurgency in short order—but the American military fixation on certainty and control, part and parcel of the broader American obsession with these notions, has gone a long way to guarantee the litany of failures. You can’t treat a hostile country like a passive object that will respond predictably to your actions.  You can’t even treat it as a chaotic system that can more or less be known statistically. At the very least, you have to recognize that it will behave as a discordant system, and react to your actions in ways that support its values, not yours: for example, by shooting or blowing up randomly chosen American soldiers to avenge family members killed by a Predator drone.

Still, it’s crucial to be aware of the possibility of the third level of uncertainty, the one that I’ve called confusion. Any hypothesis you come up with, if it becomes known or even suspected by the enemy, becomes a tool he can use to clobber you.  The highly political and thus embarrassingly public nature of American military doctrine and strategy pretty much guarantees that this will happen—does anyone really believe, for example, that the Taliban weren’t reading online news stories about the upcoming American "surge" for months before it happened, and combining that with information from a global network of intelligence sources to get a very clear picture of what was coming and how to deal with it?

So far, the consequences of confusion have been limited, because the United States has been careful to pick on nations that couldn’t fight back.  We could pound the rubble in Vietnam and Iraq, invade Panama and Grenada, and stage revolutions in Libya and a bunch of post-Communist nations, because we knew perfectly well that the worst they could do in response was kill a bunch of American soldiers. Several trends, though, suggest that this period of relative safety may be coming to an end. 

The spread of digital technology is part of it—the ease with which Iraqi insurgents figured out how to use cell phones to trigger roadside bombs is only the first foreshock of a likely tectonic shift in warfare, as DIY electronics meets DIY weapons engineering to produce cheap, homemade equivalents of smart bombs and Predator drones.  The United States’ increasing dependence on the rest of the world is another part—the number of soft targets that, if destroyed, would deal a punishing blow to America’s economy has soared in recent years, and a great many of those targets are scattered around the world, readily accessible to those with a grudge and a van full of fertilizer. Still, there’s a third factor, and it’s a function of the increasingly integrated and highly technological American military machine.

As the most gizmocentric culture in recorded history, America was probably destined from the start to end up with a military system in which most uniformed personnel operate machinery, and every detail of making war involves a galaxy of high-tech devices.  The machines and devices have been so completely integrated into military operations that they are necessities, not conveniences; I’ve been quietly informed by several people in the militaries of the US and its allies that a failure of the GPS satellite system, for example, would cripple the ability of a US military force to do much of anything. It’s far from the only such vulnerability.  Today’s US military is tightly integrated with a global technological infrastructure of fantastic complexity.  That structure is immensely powerful and efficient...but longtime readers of this blog will recall that efficiency is the opposite of resilience.

That’s why I discussed the abrupt termination of Bronze Age chariot warfare by javelin-throwing raiders in last week’s post.  If you have to fight an enemy armed with an extremely efficient military technology, one of the most likely ways to win is to find and target some previously unexploited weakness in the technology itself.  Complex as they were by the standards of the time, chariots had a very modest number of vulnerabilities, one of which the Sea Peoples attacked and exploited. By contrast, the hypercomplex American military machine is riddled with potential vulnerabilities—weak points that a hostile force might be able to monkeywrench in some unexpected way.

Surely, you may be thinking by this point, the Pentagon is thinking about this as well. No doubt they are, but the famous military penchant for endlessly refighting the last really successful war, and the tendency for weapons systems to develop political constituencies that keep them in service long after they’re obsolete, militate against a meaningful response.  US military planners in recent decades have followed the lead of the sciences in embracing the form of uncertainty I’ve called chaos, and so you get plenty of scenarios of future war that extrapolate current trends out fifteen or fifty years, with a few new bits of gosh-wow technology and a large role reserved for weapons systems, such as aircraft carriers, whose constituencies have enough clout.  The thought that hostile forces may be evolving resistance to our military equivalent of antibiotics rarely gets a look-in, and the thought that at least some of those hostile forces may be reading those same scenarios and brainstorming ways to toss a monkeywrench into the machinery—well, let’s just say that making such suggestions will be about as helpful for the career of a military officer today as the same habit was for Col. Billy Mitchell back in the day.

This is one reason why I have come to believe that of the shocks that could cause the US empire to collapse, one of the most likely is a disastrous and unexpected military defeat.  At this point, very nearly the only thing that maintains US power, and the disproportionate share of the world’s wealth that is the payoff of that power, is our eagerness to pound the bejesus out of Third World nations at the drop of a hat.  If we lose that capacity, we could end up neck deep in kim chee very quickly indeed.

****************

End of the World of the Week #36

There’s a tendency to assume that people who buy into end-of-the-world prophecies are, shall we say, a couple of horsemen short of an apocalypse. History, though, shows that it’s entirely possible to be very bright and still buy into the apocalypse meme. Among the leading examples is the redoubtable John Napier (1550-1617), who combined a fascination with apocalyptic prophecy with a well-earned reputation as one of the great mathematicians of all time.

Does that latter description sound exaggerated?  It isn’t.  This is the man who invented the decimal point. Oh, and logarithms. Not to mention one of the first practical mechanical calculating devices, the once-famous Napier’s Bones. We won’t even get into his remarkable innovations in spherical trigonometry. The point that matters for this discussion is his very careful, elegant, mathematically exact calculations of the end of the world.

Like nearly everyone in 16th-century Scotland, he took the Book of Revelation seriously, and turned the same penetrating intellect on its mysteries that he used with better results on the properties of numbers. After years of careful calculation, he determined that the Second Coming of Christ would occur either in 1688 or in 1700.  Fortunately, Napier’s reputation rests on more durable grounds, for both years passed without the faintest sound of Gabriel’s trumpet.

—for more failed end-time prophecies, see my book Apocalypse Not