
A Symposium co-sponsored by NDU and the RAND Corporation, November 1996. The conference papers were published in David S. Alberts and Thomas J. Czerwinski, eds., Complexity, Global Politics, and National Security (Washington, DC: National Defense University, 1997).

 
"The Simple and the Complex"

Murray Gell-Mann


MURRAY GELL-MANN is a Nobel Laureate in Physics. He is a professor at the Santa Fe Institute and co-chairman of its Science Board. He is also affiliated with the Los Alamos National Laboratory and the University of New Mexico. He is the author of numerous publications, including the best-seller The Quark and the Jaguar, an introduction to complexity for the general reader. This page is reproduced from URL http://www.ndu.edu/inss/books/books%20-%201998/Complexity,%20Global%20Politics%20and%20Nat'l%20Sec%20-%20Sept%2098/ch01.html

It is a pleasure, as well as an honor, to give the opening talk at this conference on Complexity, Global Politics, and National Security. I am glad to be paying my first visit to the National Defense University. As to the other sponsoring institution, I am no stranger to it. In fact, it is just forty years since I first became a RAND consultant. Now both organizations have become interested in such concepts as chaos and complexity, and I am delighted to have the opportunity to discuss them here.

At the Santa Fe Institute, which I helped to found and where I now work, we devote ourselves to studying, from many different points of view, the transdisciplinary subject that includes the meanings of simplicity and complexity, the ways in which complexity arises from fundamental simplicity, and the behavior of complex adaptive systems, along with the features that distinguish them from non-adaptive systems.

My name for that subject is plectics, derived from the Greek word plektós for "twisted" or "braided," cognate with the principal root of Latin complexus, originally "braided together," from which the English word complexity is derived. The word plektós is also related, more distantly, to the principal root of Latin simplex, originally "once folded," which gave rise to the English word simplicity. The name plectics thus reflects the fact that we are dealing with both simplicity and complexity.

I believe my task this morning is to throw some light on plectics and to indicate briefly how it may be connected with questions of national and global security, especially when the term "security" is interpreted rather broadly. We can begin with questions such as these:

  • What do we usually mean by complexity?
  • What is chaos?
  • What is a complex adaptive system?
  • Why is there a tendency for more and more complex entities to appear as time goes on?

It would take a number of quantities, differently defined, to cover all our intuitive notions of the meaning of complexity and of its opposite, simplicity. Also, each quantity would be somewhat context-dependent. In other words, complexity, however defined, is not entirely an intrinsic property of the entity described; it also depends to some extent on who or what is doing the describing.

Let us start with a rather naïvely defined quantity, which I call "crude complexity"—the length of the shortest message describing the entity. First of all, we would have to exclude pointing at the entity or calling it by a special name; something that is obviously very complex could be given a short nickname like Heinz or Zbig, but giving it that name would not make it simple. Next, we must understand that crude complexity will depend on the level of detail at which the entity is being described, what we call in physics the coarse graining. Also, the language employed will affect the minimum length of the description. That minimum length will depend, too, on the knowledge and understanding of the world that is assumed: the description of a rhinoceros can be abbreviated if it is already known what a mammal is.

Having listed these various kinds of context dependence, we can concentrate on the main feature of crude complexity, that it refers to the length of the shortest message. In my book, The Quark and the Jaguar, I tell the story of the elementary school teacher who assigned to her class a three-hundred-word essay, to be written over the weekend, on any topic. One pupil did what I used to do as a child—he spent the weekend poking around outdoors and then scribbled something hastily on Monday morning. Here is what he wrote:

"Yesterday the neighbors had a fire in their kitchen and I leaned out of the window and yelled ‘Fire! Fire! Fire! Fire!...’" If he had not had to comply with the three hundred word requirement, he could have written instead "...I leaned out of the window and yelled ‘Fire!’ 282 times."
It is this notion of compression that is crucial.
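The pupil's trick is the same one a file compressor uses: state the repeated part once and say how many times it repeats. As a rough illustration (mine, not from the original talk; the length of zlib's output is only a crude stand-in for "the shortest possible message"):

```python
import zlib

# The padded essay: a short preamble plus "Fire! " repeated 282 times.
essay = ("Yesterday the neighbors had a fire in their kitchen and "
         "I leaned out of the window and yelled " + "Fire! " * 282)

print(len(essay), "characters uncompressed")
print(len(zlib.compress(essay.encode())), "bytes after compression")  # far shorter: the repetition collapses
```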

Now in place of crude complexity we can consider a more technically defined quantity, algorithmic information content. An entity is described at a given level of detail, in a given language, assuming a given knowledge and understanding of the world, and the description is reduced by coding in some standard manner to a string of bits (zeroes and ones). We then consider all programs that will cause a standard universal computer to print out that string of bits and then stop computing. The length of the shortest such program is called the algorithmic information content (AIC). This is a well-known quantity introduced over thirty years ago by the famous Russian mathematician Kolmogorov and by two Americans, Gregory Chaitin and Ray Solomonoff, all working independently. We see, by the way, that it involves some additional context dependence through the choice of the coding procedure and of the universal computer. Because of the context dependence, AIC is most useful for comparing two strings, at least one of which has a large value of the quantity.

A string consisting of the first two million bits of pi has a low AIC because it is highly compressible: the shortest program just has to give a prescription for calculating pi and ask that the string be cut off after two million entries. But many long strings of bits are incompressible. For those strings, the shortest program is one that lists the whole string and tells the machine to print it out and then halt. Thus, for a given length of string, an incompressible one has the largest possible AIC. Such a string is called a "random" one, and accordingly the quantity AIC is sometimes called algorithmic randomness.
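To see the "short program, long output" point concretely, here is a sketch (mine, not Gell-Mann's; AIC itself cannot be computed exactly, so this only illustrates the idea). The spigot routine below is a few hundred characters of source text yet can emit as many digits of pi as desired, whereas a program printing a typical random string of the same length can do little better than contain the string verbatim.

```python
import inspect

def pi_digits(n):
    """First n decimal digits of pi, via Gibbons' unbounded spigot algorithm."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

output = "".join(map(str, pi_digits(2000)))
program = inspect.getsource(pi_digits)
print(len(program), "characters of program source")  # a few hundred
print(len(output), "digits of output")               # 2,000 here; it could just as well be 2,000,000
```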

We can now see why AIC does not correspond very well to what we usually mean by complexity. Compare a play by Shakespeare with the typical product, of equal length, of the proverbial ape at the typewriter, who types every letter with equal probability. The AIC, or algorithmic randomness, of the latter is much greater than that of the former. But it is absurd to say that the ape has produced something more complex than the work of Shakespeare. Randomness is not what we mean by complexity.

Instead, let us define what I call effective complexity, the AIC of the regularities of an entity, as opposed to its incidental features. A random (incompressible) bit string has no regularities (except its length) and very little effective complexity. Likewise something extremely regular, such as a bit string consisting entirely of ones, will also have very little effective complexity, because its regularities can be described very briefly. To achieve high effective complexity, an entity must have intermediate AIC and obey a set of rules requiring a long description. But that is just what we mean when we say that the grammar of a certain language is complex, or that a certain conglomerate corporation is a complex organization, or that the plot of a novel is very complex—we mean that the description of the regularities takes a long time.
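One rough way to write down the distinction, in the spirit of a two-part description (a sketch of the idea only, not the careful technical definition), is to split the algorithmic information of an entity s into the part that encodes its regularities and the part that encodes the incidental details given those regularities:

$$
K(s) \;\approx\; \underbrace{K(\text{regularities})}_{\text{effective complexity}} \;+\; \underbrace{K(s \mid \text{regularities})}_{\text{incidental detail}} .
$$

A random string puts essentially all of its AIC into the second term, a string of all ones puts almost nothing into either, and a Shakespeare play puts a great deal into the first.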

The famous computer scientist, psychologist, and economist Herbert Simon used to call attention to the path of an ant, which has a high AIC and appears complex at first sight. But when we realize that the ant is following a rather simple program, into which are fed the incidental features of the landscape and the pheromone trails laid down by the other ants for the transport of food, we understand that the path is fundamentally not very complex. Herb says, "I got a lot of mileage out of that ant." And now it is helping me to illustrate the difference between crude and effective complexity.

There can be no finite procedure for finding all the regularities of an entity. We may ask, then, what kinds of things engage in identifying sets of regularities. The answer is: complex adaptive systems, including all living organisms on Earth.

A complex adaptive system receives a stream of data about itself and its surroundings. In that stream, it identifies particular regularities and compresses them into a concise "schema," one of many possible ones related by mutation or substitution. In the presence of further data from the stream, the schema can supply descriptions of certain aspects of the real world, predictions of events that are to happen in the real world, and prescriptions for behavior of the complex adaptive system in the real world. In all these cases, there are real world consequences: the descriptions can turn out to be more accurate or less accurate, the predictions can turn out to be more reliable or less reliable, and the prescriptions for behavior can turn out to lead to favorable or unfavorable outcomes. All these consequences then feed back to exert "selection pressures" on the competition among various schemata, so that there is a strong tendency for more successful schemata to survive and for less successful ones to disappear or at least to be demoted in some sense.
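The loop just described can be caricatured in a few lines of code. The sketch below is a toy of my own devising, not Gell-Mann's formalism: each "schema" is a short repeating pattern that tries to predict a noisy periodic data stream, and selection pressure keeps the schemata that predict well.

```python
import random

def make_stream(n):
    """Environment: a mostly periodic bit stream with 10% noise (the incidental part)."""
    base = [0, 1, 1] * (n // 3 + 1)
    return [b if random.random() > 0.1 else random.randint(0, 1) for b in base[:n]]

def predict(schema, position):
    """A schema is just a short pattern; it predicts by cycling through itself."""
    return schema[position % len(schema)]

def score(schema, stream):
    """Real-world consequences: how often did the schema's predictions come true?"""
    return sum(predict(schema, i) == bit for i, bit in enumerate(stream))

def mutate(schema):
    """Schemata related by small changes compete with one another."""
    s = list(schema)
    s[random.randrange(len(s))] ^= 1
    return tuple(s)

random.seed(0)
stream = make_stream(300)
population = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(8)]
for generation in range(20):
    ranked = sorted(population, key=lambda sc: score(sc, stream), reverse=True)
    survivors = ranked[:4]                                              # selection pressure
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

print("surviving schema:", max(population, key=lambda sc: score(sc, stream)))
```

With this toy setup the surviving schema typically converges on the underlying pattern (0, 1, 1) while ignoring the noise, which is the compression-of-regularities step described above.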

Take the human scientific enterprise as an example. The schemata are theories. A theory in science compresses into a brief law (say a set of equations) the regularities in a vast, even indefinitely large body of data. Maxwell’s equations, for instance, yield the electric and magnetic fields in any region of the universe if the special circumstances there—electric charges and currents and boundary conditions—are specified. (We see how the schema plus additional information from the data stream leads to a description or prediction.)
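For reference, the compression in this example can be displayed explicitly. Maxwell's equations in differential form (SI units) are:

$$
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} .
$$

Four short equations, fed with the particular charges, currents, and boundary conditions of a region, summarize the regularities of an effectively unlimited body of electromagnetic data.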

In biological evolution, the schemata are genotypes. The genotype, together with all the additional information supplied by the process of development—for higher animals, from the sperm and egg to the adult organism—determines the character, the "phenotype," of the individual adult. Survival to adulthood of that individual, sexual selection, and success or failure in producing surviving progeny all exert selection pressures on the competition of genotypes, since they affect the transmission to future generations of genotypes resembling that of the individual in question.

In the case of societal evolution, the schemata consist of laws, customs, myths, traditions, and so forth. The pieces of such a schema are often called "memes," a term introduced by Richard Dawkins by analogy with genes in the case of biological evolution.

For a business firm, strategies and practices form the schemata. In the presence of day-to-day events, a schema affects the success of the firm, as measured by return to the stockholders in the form of dividends and share prices. The results feed back to affect whether the schema is retained or a different one substituted (often under a new CEO).

A complex adaptive system (CAS) may be an integral part of another CAS, or it may be a loose aggregation of complex adaptive systems, forming a composite CAS. Thus a CAS has a tendency to give rise to others.

On Earth, all complex adaptive systems seem to have some connection with life. To begin with, there was the set of prebiotic chemical reactions that gave rise to the earliest life. Then the process of biological evolution, as we have indicated, is an example of a CAS. Likewise each living organism is a CAS. In a mammal, such as a human being, the immune system is a complex adaptive system too. Its operation is something like that of biological evolution, but on a much faster time scale. (If it took hundreds of thousands of years for us to develop antibodies to invading microbes, we would be in serious trouble.) The process of learning and thinking in a human individual is also a complex adaptive system. In fact, the term "schema" is taken from psychology, where it refers to a pattern used by the mind to grasp an aspect of reality. Aggregations of human beings can also be complex adaptive systems, as we have seen: societies, business firms, the scientific enterprise, and so forth.

Nowadays, we have computer-based complex adaptive systems, such as "neural nets" and "genetic algorithms." While they may sometimes involve new, dedicated hardware, they are usually implemented on conventional hardware with special software. Their only connection with life is that they were developed by human beings. Once they are put into operation, they can, for example, invent new strategies for winning at games that no human being has ever discovered.

Science fiction writers and others may speculate that in the distant future a new kind of complex adaptive system might be created, a truly composite human being, by wiring together the brains of a number of people. They would communicate not through language, which Voltaire is supposed to have said is used by men to conceal their thoughts, but through sharing all their mental processes. My friend Shirley Hufstedler says she would not recommend this procedure to couples about to be married.

The behavior of a complex adaptive system, with its variable schemata undergoing evolution through selection pressures from the real world, may be contrasted with "simple" or "direct" adaptation, which does not involve a variable schema, but utilizes instead a fixed pattern of response to external changes. A good example of direct adaptation is the operation of a thermostat, which simply turns on the heat when the temperature falls below a fixed value and turns it off when the temperature rises above that value.
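The contrast can be made concrete in a couple of lines (an illustrative sketch; the setpoint value is arbitrary). The rule below maps input to response, but nothing in the system ever revises the rule itself, which is what distinguishes direct adaptation from a complex adaptive system.

```python
def thermostat(temperature_c, setpoint_c=20.0):
    """Direct adaptation: a fixed response rule with no schema to revise."""
    return "heat on" if temperature_c < setpoint_c else "heat off"

print(thermostat(18.5))  # heat on
print(thermostat(22.0))  # heat off
```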

In the study of a human organization, such as a tribal society or a business firm, one may encounter at least three different levels of adaptation, on three different time scales.

1) On a short time scale, we may see a prevailing schema prescribing that the organization react to particular external changes in specified ways; as long as that schema is fixed, we are dealing with direct adaptation.

2) On a longer time scale, the real world consequences of a prevailing schema (in the presence of events that occur) exert selection pressures on the competition of schemata and may result in the replacement of one schema by another.

3) On a still longer time scale, we may witness the disappearance of some organizations and the survival of others, in a Darwinian process. The evolution of schemata was inadequate in the former cases, but adequate in the latter cases, to cope with the changes in circumstances.

It is worth making the elementary point about the existence of these levels of adaptation because they are often confused with one another. As an example of the three levels, we might consider a prehistoric society in the U.S. Southwest that had the custom (1) of moving to higher elevations in times of unusual heat and drought. In the event of failure of this pattern, the society might try alternative schemata (2) such as planting different crops or constructing an irrigation system using water from far away. In the event of failure of all the schemata that are tried, the society may disappear (3), say with some members dying and the rest dispersed among other societies that survive. We see that in many cases failure to cope can be viewed in terms of the evolutionary process not being able to keep pace with change.

Individual human beings in a large organization or society must be treated by the historical sciences as playing a dual role. To some extent they can be regarded statistically, as units in a system. But in many cases a particular person must be treated as an individual, with a personal influence on history. Those historians who tolerate discussion of contingent history (meaning counterfactual histories in addition to the history we experience) have long argued about the extent to which broad historical forces eventually "heal" many of the changes caused by individual achievements—including negative ones, such as assassinations.

A history of the U.S. Constitutional Convention of 1787 may make much of the conflicting interests of small states and large states, slave states and free states, debtors and creditors, agricultural and urban populations, and so forth. But the compromises invented by particular individuals and the role that such individuals played in the eventual ratification of the Constitution would also be stressed. The outcome could have been different if certain particular people had died in an epidemic just before the Convention, even though the big issues would have been the same.

How do we think about alternative histories? Is the notion of alternative histories a fundamental concept? The fundamental laws of nature are:

(1) the dynamical law of the elementary particles—the building blocks of all matter—along with their interactions and

(2) the initial condition of the universe near the beginning of its expansion some ten billion years ago.

Theoretical physicists seem to be approaching a real understanding of the first of these laws, as well as gaining some inklings about the second one. It may well be that both are rather simple and knowable, but even if we learn what they are, that would not permit us, even in principle, to calculate the history of the universe. The reason is that fundamental theory is probabilistic in character (contrary to what one might have thought a century ago). The theory, even if perfectly known, predicts not one history of the universe but probabilities for a huge array of alternative histories, which we may conceive as forming a branching tree, with probabilities at all the branchings. In a short story by the great Argentine writer Jorge Luis Borges, a character creates a model of these branching histories in the form of a garden of forking paths.

The particular history we experience is co-determined, then, by the fundamental laws and by an inconceivably long sequence of chance events, each of which could turn out in various ways. This fundamental indeterminacy is exacerbated for any observer—or set of observers, such as the human race—by ignorance of the outcomes of most of the chance events that have already occurred, since only a very limited set of observations is available. Any observer sees only an extremely coarse-grained history.

The phenomenon of chaos in certain nonlinear systems is a very sensitive dependence of the outcome of a process on tiny details of what happened earlier. When chaos is present, it still further amplifies the indeterminacy we have been discussing.
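A minimal numerical sketch of that sensitivity (using the logistic map, a standard textbook example, rather than the pendulum exhibit described next): two trajectories that begin a mere 1e-10 apart track each other for a few dozen steps and then become completely uncorrelated.

```python
def step(x, r=4.0):
    """One iteration of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearly identical starting conditions
for n in range(1, 61):
    x, y = step(x), step(y)
    if n % 10 == 0:
        print(f"step {n:2d}:  x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.1e}")
```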

Last year, at the wonderful science museum in Barcelona, I saw an exhibit that beautifully illustrated chaos. A nonlinear version of a pendulum was set up so that the visitor could hold the bob and start it out in a chosen position and with a chosen velocity. One could then watch the subsequent motion, which was also recorded with a pen on a sheet of paper. The visitor was then invited to seize the bob again and try to imitate exactly the previous initial position and velocity. No matter how carefully that was done, the subsequent motion was quite different from what it was the first time. Comparing the records on paper confirmed the difference in a striking way.

I asked the museum director what the two men were doing who were standing in a corner watching us. He replied, "Oh, those are two Dutchmen waiting to take away the chaos." Apparently, the exhibit was about to be dismantled and taken to Amsterdam. But I have wondered ever since whether the services of those two Dutchmen would not be in great demand across the globe, by organizations that wanted their chaos taken away.

Once we view alternative histories as forming a branching tree, with the history we experience co-determined by the fundamental laws and a huge number of accidents, we can ponder the accidents that gave rise to the people assembled in this room. A fluctuation many billions of years ago produced our galaxy, and it was followed by the accidents that contributed to the formation of the solar system, including the planet Earth. Then there were the accidents that led to the appearance of the first life on this planet, and the very many additional accidents that, along with natural selection, have shaped the course of biological evolution, including the characteristics of our own subspecies, which we call, somewhat optimistically, Homo sapiens. Finally we may consider the accidents of genetics and sexual selection that helped to produce the genotypes of all the individuals here, and the accidents in the womb, in childhood, and since that have helped to make us what we are today.

Now most accidents in the history of the universe don’t make much difference to the coarse-grained histories with which we are concerned. If two oxygen molecules in the atmosphere collide and then go off in one pair of directions or another, it usually makes no difference. But the fluctuation that produced our galaxy, while it too may have been insignificant on a cosmic scale, was of enormous importance to anything in our galaxy. Some of us call such a chance event a "frozen accident."

I like to quote an example from human history. When Arthur, the elder brother of King Henry VIII of England, died—no doubt of some quantum fluctuation—early in the sixteenth century, Henry replaced Arthur as heir to the throne and as the husband of Catherine of Aragón. That accident influenced the way the Church of England separated from the Roman Catholic Church (although the separation itself might have occurred anyway) and changed the history of the English and then the British monarchy, all the way down to the antics of Charles and Diana.

It is the frozen accidents, along with the fundamental laws, that give rise to regularities and thus to effective complexity. Since the fundamental laws are believed to be simple, it is mainly the frozen accidents that are responsible for effective complexity. We can relate that fact to the tendency for more and more complex entities to appear as time goes on.

Of course there is no rule that everything must increase in complexity. Any individual entity may increase or decrease in effective complexity or stay the same. When an organism dies or a civilization dies out, it suffers a dramatic decrease in complexity. But the envelope of effective complexity keeps getting pushed out, as more and more complex things arise.

The reason is that as time goes on frozen accidents keep accumulating, and so more and more effective complexity is possible. That is true even for non-adaptive evolution, as in galaxies, stars, planets, rocks, and so forth. It is well-known to be true of biological evolution, where in some cases higher effective complexity probably confers an advantage. And we see all around us the appearance of more and more complex regulations, instruments, computer software packages, and so forth, even though in many cases certain things are simplified.

The tendency of more and more complex forms to appear in no way contradicts the famous second law of thermodynamics, which states that for a closed (isolated) system, the average disorder ("entropy") keeps increasing. There is nothing in the second law to prevent local order from increasing, through various mechanisms of self-organization, at the expense of greater disorder elsewhere. (One simple and widespread mechanism of self-organization on a cosmic scale is provided by gravitation, which has caused material to condense into the familiar structures with which astronomy is concerned, including our own planet.)
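Written out a little more explicitly (a standard textbook statement, added here for reference): for an isolated system the total entropy change satisfies

$$
\Delta S_{\text{total}} \;=\; \Delta S_{\text{local}} + \Delta S_{\text{elsewhere}} \;\ge\; 0 ,
$$

so a local decrease in entropy, that is, an increase in local order, is fully consistent with the second law as long as it is paid for by an at least equal increase in entropy elsewhere.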

Here on Earth, once it was formed, systems of increasing complexity have arisen as a consequence of the physical evolution of the planet over some four and a half billion years, biological evolution over four billion years or so, and, over a very short period on a geological time scale, human cultural evolution.

The process has gone so far that we human beings are now confronted with immensely complex ecological and social problems, and we are in urgent need of better ways of dealing with them. When we attempt to tackle such difficult problems, we naturally tend to break them up into more manageable pieces. That is a useful practice, but it has serious limitations.

When dealing with any nonlinear system, especially a complex one, it is not sufficient to think of the system in terms of parts or aspects identified in advance, then to analyze those parts or aspects separately, and finally to combine those analyses in an attempt to describe the entire system. Such an approach is not, by itself, a successful way to understand the behavior of the system. In this sense there is truth in the old adage that the whole is more than the sum of its parts.

Unfortunately, in a great many places in our society, including academia and most bureaucracies, prestige accrues principally to those who study carefully some aspect of a problem, while discussion of the big picture is relegated to cocktail parties. It is of crucial importance that we learn to supplement those specialized studies with what I call a crude look at the whole.

Now the chief of an organization, say a head of government or a CEO, has to behave as if he or she is taking into account all the aspects of a situation, including the interactions among them, which are often strong. It is not so easy, however, for the chief to take a crude look at the whole if everyone else in the organization is concerned only with a partial view.

Even if some people are assigned to look at the big picture, it doesn’t always work out. A few months ago, the CEO of a gigantic corporation told me that he had a strategic planning staff to help him think about the future of the business, but that the members of that staff suffered from three defects:

    * They seemed largely disconnected from the rest of the company.

    * No one could understand what they said.

    * Everyone else seemed to hate them.

Despite such experiences, it is vitally important that we supplement our specialized studies with serious attempts to take a crude look at the whole.

At this conference, issues of global politics and security will be addressed, including ones specifically concerned with the security of the United States. But security narrowly defined depends in very important ways on security in the broadest sense. Some politicians deeply concerned about military strength appear to resent the idea of diluting that concern by emphasizing a broader conception of security, but many thinkers in the armed services themselves recognize that military security is deeply intertwined with all the other major global issues.

I like to discuss those issues under the rubric of sustainability, one of today’s favorite catchwords. It is rarely defined in a careful or consistent way, so perhaps I can be forgiven for attaching to it my own set of meanings. Broadly conceived, sustainability refers to quality that is not purchased mainly at the expense of the future—quality of human life and of the environment. But I use the term in a much more inclusive way than most people: sustainability is not restricted to environmental, demographic, and economic matters, but refers also to political, military, diplomatic, social, and institutional or governance issues—and ultimately sustainability depends on ideological issues and lifestyle choices. As used here, sustainability refers as much to sustainable peace, sustainable preparedness for possible conflict, sustainable global security arrangements, sustainable democracy and human rights, and sustainable communities and institutions as it does to sustainable population, economic activity, and ecological integrity.

All of these are closely interlinked, and security in the narrow sense is a critical part of the mix. In the presence of destructive war, it is hardly possible to protect nature very effectively or to keep some important human social ties from dissolving. Conversely, if resources are abused and human population is rapidly growing, or if communities lose their cohesion, conflicts are more likely to occur. If huge and conspicuous inequalities are present, people will be reluctant to restrain quantitative economic growth in favor of qualitative growth as would be required to achieve a measure of economic and environmental sustainability. At the same time, great inequalities may provide the excuse for demagogues to exploit or revive ethnic or class hatreds and provoke deadly conflict. And so forth.

In my book, The Quark and the Jaguar, I suggest that studies be undertaken of possible paths toward sustainability (in this very general sense) during the course of the next century, in the spirit of taking a crude look at the whole. I employ a modified version of a schema introduced by my friend James Gustave Speth, then president of the World Resources Institute and now head of the United Nations Development Program. The schema involves a set of interlinked transitions that have to occur if the world is to switch over from present trends toward a more sustainable situation:

1) The demographic transition to a roughly stable human population, worldwide and in each broad region. Without that, talk of sustainability seems almost pointless.

2) The technological transition to methods of supplying human needs and satisfying human desires with much lower environmental impact per person, for a given level of conventional prosperity.

3) The economic transition to a situation where growth in quality gradually replaces growth in quantity, while extreme poverty, which cries out for quantitative growth, is alleviated. (Analysts, by the way, are now beginning to use realistic measures of well-being that depart radically from narrow economic measures by including mental and physical health, education, and so forth.) The economic transition has to involve what economists call the internalization of externalities: prices must come much closer to reflecting true costs, including damage to the future.

4) The social transition to a society with less inequality, which, as remarked before, should make the decline of quantitative growth more acceptable. (For example, fuel taxes necessary for conservation adversely affect the poor who require transport to work, but the impact of such taxes can be reduced by giving a subsidy to the working poor—such as a negative income tax—that is not tied to fuel consumption.) The social transition includes a successful struggle against large-scale corruption, which can vitiate attempts to regulate any activity through law.

5) The institutional transition to more effective means of coping with conflict and with the management of the biosphere and human activities in it. We are now in an era of simultaneous globalization and fragmentation, in which the relevance of national governments is declining somewhat, even though the power to take action is still concentrated largely at that level. Most of our problems involving security—whether in the narrow or the broad sense—have global implications and require transnational institutions for their solution. We already have a wide variety of such institutions, formal and informal, and many of them are gradually gaining in effectiveness. But they need to become far more effective. Meanwhile, local and national institutions need to become more responsive and, in many places, much less corrupt. Such changes require the development of a strong sense of community and responsibility at many levels, but in a climate of political and economic freedom. How to achieve the necessary balance between cooperation and competition is the most difficult problem at every level.

6) The informational transition. Coping on local, national, and transnational levels with technological advances, environmental and demographic issues, social and economic problems, and questions of international security, as well as the strong interactions among all of them, requires a transition in the acquisition and dissemination of knowledge and understanding. Only if there is a higher degree of comprehension, among ordinary people as well as elite groups, of the complex issues facing humanity is there any hope of achieving sustainable quality. But most of the discussions of the new digital society concentrate on the dissemination and storage of information, much of it misinformation or badly organized information, rather than on the difficult and still poorly rewarded work of converting that so-called information into knowledge and understanding. And here again we encounter the pervasive need for a crude look at the whole.

7) The ideological transition to a world view that combines local, national, and regional loyalties with a "planetary consciousness," a sense of solidarity with all human beings and, to some extent, all living things. Only by acknowledging the interdependence of all people and, indeed, of all life can we hope to broaden our individual outlooks so that they reach out in time and space to embrace the vital long-term issues and worldwide problems along with immediate concerns close to home. This transition may seem even more Utopian than some of the others, but if we are to manage conflict that is based on destructive particularism, it is essential that groups of people that have traditionally opposed one another acknowledge their common humanity.

Such a progressive extension of the concept of "us" has, after all, been a theme in human history from time immemorial. One dramatic manifestation is the greatly diminished likelihood over the last fifty years of armed conflict in Western Europe. Another is, of course, the radical transformation of relationships that is often called "The End of the Cold War." The recent damping-down of long-standing civil wars in a number of countries is also rather impressive.

Our tendency is to study separately the various aspects of human civilization that correspond to the different transitions. Moreover, in our individual political activities we tend to pick out just one or a few of these aspects. Some of us may belong to organizations favoring a strong defense or arms control or both, others to the United Nations Association of the United States, others to ZPG (Zero Population Growth) or the Population Council, some to organizations plumping for more assistance to developing countries or to ones working for more generous treatment of the poor in our own country, some to organizations promoting democracy and human rights, some to environmental organizations. But the issues dear to these various organizations are all tightly interlinked, and a portion of our activity needs to be devoted to examining the whole question of the approach to sustainability in all these different spheres.

It is reasonable to ask why a set of transitions to greater sustainability should be envisaged as a possibility during the coming century. The answer is that we are living in a very special time. Historians tend to be skeptical of most claims that a particular age is special, since such claims have been made so often. But this turn of the millennium really is special, not because of our arbitrary way of reckoning time but because of two related circumstances:

a) The changes that we humans produce in the biosphere, changes that were often remarkably destructive even in the distant past when our numbers were few, are now of order one. We have become capable of wiping out a very large fraction of humanity—and of living things generally—if a full-scale world war should break out. Even if it does not, we are still affecting the composition of the atmosphere, water resources, vegetation, and animal life in profound ways around the planet. While such effects of human activities have been surprisingly great in the past, they were not global in scope as they are now.

b) The graph of human population against time has the highest rate of increase ever, and that rate of increase is just beginning to decline. In other words, the curve is near what is called a "point of inflection."

For centuries, even millennia, world population was, to a fair approximation, inversely proportional to 2025 minus the year. (That is a solution of the equation in which the rate of change of a variable is proportional to its square.) Only during the last thirty years or so has the total number of human beings been deviating significantly from this formula, which would have had it becoming infinite a generation from now! The demographic transition thus appears to be under way at last. It is generally expected that world population will level off during the coming century at something like twice its present value, but decisions and events in the near future can affect the final figure by billions either way. That is especially significant in regions such as Africa, where present trends indicate a huge population increase very difficult to support and likely to contribute to severe environmental degradation. In general, the coming century, the century of inflection points in a number of crucial variables, seems to be the time when the human race might still accomplish the transitions to greater sustainability without going through disaster.
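The growth law mentioned above can be written out (an elementary calculation, added for reference): if the rate of change of the population $N$ is proportional to its square, then

$$
\frac{dN}{dt} = c\,N^{2} \quad\Longrightarrow\quad N(t) = \frac{1}{c\,(t_{0}-t)} ,
$$

which is inversely proportional to $t_{0}-t$ and diverges as $t$ approaches $t_{0}$ (about 2025 in the fit Gell-Mann cites). The deviation of the actual population curve from this formula over the last few decades is the sign that the hyperbolic regime is ending and the demographic transition is under way.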

It is essential, in my opinion, to make some effort to search out in advance what kinds of paths might lead humanity to a reasonably sustainable and desirable world during the coming decades. And while the study of the many different subjects involved is being pursued by the appropriate specialists, we need to supplement that study with interdisciplinary investigations of the strong interdependence of all the principal facets of the world situation. In short, we need a crude look at the whole, treating global security and global politics as parts of a very general set of questions about the future.



