Human beings have been aware of patterns in nature for as long as they have been human. Without such an awareness, human culture cannot function. Agriculture is impossible without an understanding of the seasons; even before agriculture, hunting depended upon animal life cycles and patterns as simple as the trajectory of a thrown rock. Humans have discovered that one of the most effective tools for grasping nature's patterns is mathematics.
Some physicists think that the universe is made of mathematics. Even if this is an exaggeration, there is no doubt that the universe often behaves as if mathematics is a central component of it. It is a message that goes back well before Plato, with his celebrated image of the real world as the shadow on a wall, cast by an ideal mathematical world that we glimpse only imperfectly. It was Pythagorean philosophy which maintained that all is number. Later thinkers added geometry as well; Johannes Kepler discovered that the orbit of Mars is an ellipse, a curve that featured prominently in the geometry of Euclid. It was this geometric pattern which led Isaac Newton to formulate the law of gravity. In so doing, he introduced new mathematical tools such as calculus and dynamics, and his "System of the World" was so successful that it became a model for succeeding generations.
But models change, and today we are starting to glimpse a new vision of mathematics in nature, one that is more flexible and adaptable than the numerology of Pythagoras or the rigid geometry of Euclid. This vision is providing a mathematical understanding of nature's processes as well as its structures.
Hidden within the Newtonian paradise was a philosophical time bomb; it has just exploded. For centuries science made a plausible assumption: that the key to understanding nature is to find the simple laws which underpin it. Once the laws are known, all else follows. Not so. Thanks to various scientific mavericks, we now know that simple laws can give rise to behaviour of endless complexity-and conversely, that highly complex systems often exhibit well defined, but apparently acausal, patterns.
One area that stands to benefit from this new knowledge is biology. There are many biological phenomena that we do not understand in depth. One is the evolution of organised structure in the living world; another, our main focus here, is the occurrence of mathematical form in an organism.
The maverick who led the way in this field was Sir D'Arcy Wentworth Thompson: a Scot born in Edinburgh in 1860, both zoologist and classical scholar. At a time when most zoological studies were purely descriptive, D'Arcy Thompson became convinced that many aspects of biological form hinged on universal mathematical principles. In 1917 he assembled the evidence in his book On Growth and Form.
It led its readers from basic principles to generalities-demonstrating, for instance, that the body-plans of two different species of fish can be matched one-to-one by a simple co-ordinate transformation. It related the biology of dividing cells to the mathematics of surfaces of least area and the physics of soap bubbles. It assembled insights into the geometry and numerology of plants that spanned several centuries, centred upon the strange predilection of the plant kingdom for Fibonacci numbers-the sequence 1, 2, 3, 5, 8, 13, 21, 34, 55...-in which each number is the sum of the preceding two. Most flowers have a Fibonacci number of petals.
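The Fibonacci rule is simple enough to state in a few lines of code. As a minimal sketch, here is the sequence exactly as quoted above, starting from 1 and 2, with each new term the sum of the preceding two:

```python
def fibonacci(count):
    """Return the first `count` terms of the sequence 1, 2, 3, 5, 8, ...
    in which each term is the sum of the preceding two."""
    terms = [1, 2]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])
    return terms[:count]

print(fibonacci(9))  # the nine terms quoted in the text
```

Counting the petals of most flowers gives a number from this list: lilies have 3, buttercups 5, many delphiniums 8, marigolds 13, asters 21, and most daisies 34, 55, or 89.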
Despite Thompson's dazzling collection of biomathematical relationships, the mathematics of living form has proved less tractable than Sir D'Arcy expected. One reason is that genetic influences play a far greater role in determining biological form than he realised. A living creature does not follow some free-flowing pattern of growth; instead, chemical instructions written in its genes organise and constrain its growth in a manner that ultimately derives from the evolutionary imperative of survival. The simplest way to sum up what we now know is to say that Thompson was almost always wrong in important details, but almost always right in spirit.
One of the problems with being ahead of your time is that you have to work with inadequate tools. But a series of recent discoveries has given us a much more organic understanding of how mathematical rules create patterns. Indeed, our very conception of what a pattern is has begun to change. Chaos theory has taught us that simple deterministic rules can create intricate and apparently random effects, which means that we must distinguish between the patterns of structure that are made explicit by the rules, and the more subtle patterns of behaviour that the rules implicitly define. Complexity theory has shown that simple large scale patterns can "emerge" from the interactions between individual units, so that the capabilities of the whole may transcend those of its parts. Global principles such as symmetry and topology have provided ways to characterise and organise not only static patterns, but the dynamics that give birth to them, and how they grow or evolve.
Some of these discoveries originated thanks to another maverick who embellished Thompson's insights-a homosexual genius who, in an age of moral intolerance, took his own life. He was Alan Mathison Turing, remembered today as a father of the electronic computer and artificial intelligence. But Turing was also a pure mathematician, making important discoveries in number theory and abstract algebra; a logician, who found a simple route to Kurt Gödel's epic discoveries about the limitations of axiomatic systems as a base for mathematics; as well as a biomathematician. He sought a mathematical theory for the patterns which appear on animals-stripes, spots, dapples, patches. Unlike Thompson's, his ideas went far beyond description: he proposed a general mechanism for the formation of biological pattern, inspiring a new branch of mathematics known as reaction-diffusion equations.
According to Turing, the development of mathematical patterns in animal markings arises through the combination of two processes. One involves some chemical "trigger" for pattern formation, a morphogen, which diffuses across the embryonic creature's surface, or even through its body. The other is a local chemical reaction, taking place inside each cell, which interacts with the diffusing morphogen. Mathematicians call the resulting system a reaction-diffusion equation.
One fascinating example is a celebrated laboratory experiment known as the Belousov-Zhabotinskii reaction (mercifully shortened to BZ), after the Russians who discovered and developed it. Until then, chemists had expected every reaction to run its course and then stop, but Belousov's reaction cycled through the same sequence of changes over and over again, in the process forming intricate spatial patterns. A dish containing a thin layer of evenly mixed red BZ liquid, left to its own devices, will develop a blue spot which grows into an ever-expanding series of concentric red and blue rings like an archery target. If the rings are broken they curl up into spirals.
Where do the patterns come from? The answer is that the patterns arise when a uniform distribution of morphogen becomes unstable. In a stable distribution, tiny irregularities are automatically smoothed out, but this does not always happen. In suitable circumstances, a tiny random fluctuation-triggered perhaps by the thermal vibration of atoms-may grow, and the bigger it becomes the faster it grows, until eventually other influences limit its size.
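This growth-then-saturation can be caricatured by a single toy equation; the sketch below is an illustration of the general mechanism, not Turing's model. A disturbance x grows at a rate proportional to its own size, so the bigger it is the faster it grows, until the nonlinear term limits it:

```python
def fluctuation(x0=1e-6, dt=0.01, steps=3000):
    """Euler integration of the toy equation dx/dt = x - x**3:
    a tiny disturbance grows exponentially at first, then saturates
    once the -x**3 term becomes comparable in size."""
    x, history = x0, [x0]
    for _ in range(steps):
        x += dt * (x - x ** 3)
        history.append(x)
    return history
```

Started at one millionth, the disturbance is still invisible for a long while, then shoots up and levels off near 1: the instability amplifies a negligible fluctuation into a full-sized feature, but other influences cap its growth.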
Like Thompson, Turing suffered the usual fate of brilliant mavericks: although right in spirit, he was wrong in detail. His choice of equations fails to predict such things as the dependence of pattern on temperature. The large iridescent spots on the tail of a peacock seem, at first sight, to be prime candidates for Turing's theory, but the spots extend across the many filaments of a single feather as if there were no joins, whereas a diffusing morphogen would get trapped at the boundaries. Moreover, the patterns are laid down long before the mature shape develops, but it is on the adult that we see the perfect Turingesque spots. So biologists lost confidence in Turing's equations, and forgot the greater insight, which is about mathematical structure, not specific equations.
Even now, Turing's theory undergoes periodic revivals. Last year two Japanese biomathematicians found that stripes on a particular species of tropical fish obey Turing's equations accurately. Their clinching argument was that over about three months the stripes on the fish move-and the changes are precisely as Turing's equations predict. But Turing's reaction-diffusion equations are only one example of a million possibilities with similar pattern-forming features. Over the last decade variants on Turing's scheme have been devised which come much closer to capturing biological realities: among these are the "mechano-chemical" equations, which perceive growth and pattern as an interaction between chemistry and geometry. The concentrations of various chemicals, at some position in or on an organism, determine not just its surface patterns, but also its growth rate, and hence the way its shape changes. On the other hand, changes in shape can stimulate or suppress the production of those very chemicals, so that shape and chemistry form an endless feedback loop.
While mathematically-minded scientists were developing general theories of pattern formation and applying them to everything they could lay their hands on, biology was marching to a different drumbeat. Two more mavericks, Francis Crick and James Watson, had discovered what Crick modestly called "the secret of life." With a mixture of calculation and guesswork, published results and gossip, and a spate of molecular model-building, Crick and Watson stumbled upon the double helix structure of DNA. Following this breakthrough, biologists were led to the genetic code, whereby triplets of DNA "bases" define the structure of the complex proteins from which living creatures are made. The genetic code underpins nearly all of our current understanding of how living organisms replicate, develop, mutate, and evolve; and it has led to the image of life as a kind of computer algorithm, programmed in DNA and implemented in protein. At first sight, such an image leaves little room for mathematical equations such as Turing's: programmes are flexible-you can write anything you want-but equations are constrained; they can do only what the mathematics permits. So today's biology has mostly turned its back on the mathematicians, seduced by the molecular biologists.
But the latest generation of science mavericks, still in hot pursuit of the principles underlying biological form and pattern, is attempting a kind of mathematico-biological fusion which, like the jazz-rock fusion of some 1980s pop music, tries to get the best of both worlds. Successes include an explanation of those puzzling Fibonacci patterns in flowers. Stephane Douady and Yves Couder, two French physicists, have traced the numerology to the dynamics of cell movement: seeds, sepals, and petals begin their existence as "primordia," little clumps of cells that emanate from the tip of the plant and drift outwards from it as the plant continues to grow. Each new primordium is crowded by the previous ones, so gets pushed into the biggest available gap. The result is that successive primordia appear along a tightly wound spiral, separated at intervals by the "golden angle," which is 137.5 degrees; the Fibonacci numbers show up because ratios of Fibonacci numbers give the best fractional approximations to this golden angle. So genetics can choose between Fibonacci numbers, or even (within limits defined by the same dynamical theory) choose a few other numbers instead, but it cannot escape the mathematical constraints of plant cell dynamics.
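The geometry behind this can be sketched numerically. The toy model below is a drastic simplification of Douady and Couder's dynamics, with hypothetical function names: each primordium is simply placed at a fixed angular step from its predecessor. Stepping by the golden angle keeps the primordia evenly spread around the tip, while a "nice" rational angle such as 120 degrees piles them up along just three rays:

```python
# The golden angle: the circle divided in the golden ratio.
GOLDEN_ANGLE = 360 * (1 - 1 / ((1 + 5 ** 0.5) / 2))  # about 137.5 degrees

def primordia_angles(count, step=GOLDEN_ANGLE):
    """Angular position of each primordium, rotated `step` degrees
    from its predecessor (a toy placement rule)."""
    return [(k * step) % 360 for k in range(count)]

def smallest_gap(angles):
    """Smallest angular gap between neighbouring positions."""
    s = sorted(angles)
    gaps = [b - a for a, b in zip(s, s[1:])]
    gaps.append(360 - s[-1] + s[0])  # wrap-around gap
    return min(gaps)
```

With the golden angle, even after a hundred primordia no two positions come close to coinciding; with a 120-degree step, the fourth primordium lands exactly on top of the first. The irrationality of the golden ratio is what fills space so evenly, and it is this packing that the crowding dynamics selects.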
This kind of synthesis is wonderful when it succeeds, but it is a dangerous game because anybody can stick two different things together in a superficial way. The point is to create something that transcends either component-a kind of synergy that Jack Cohen and I, in The Collapse of Chaos, call "complicity." The American biomathematician Stuart Kauffman believes he can already discern the general framework needed. His inspiration is drawn from different areas of late 20th century science: chaos theory, artificial intelligence, computer simulation, non-equilibrium thermodynamics, fuzzy logic, cellular automata. It doesn't greatly matter what these names mean, but it is important to understand that the sources of inspiration are diverse and that each, individually, is seen by the orthodox as distinctly flaky. This makes the task of today's mavericks even harder, because a strange mixture of knowledge is required to understand what they are trying to say.
The concept that sums up this flaky proto-framework is "complexity theory," which is associated with the Santa Fe Institute set up in 1984 by the Nobel prize-winning physicist Murray Gell-Mann. Complexity theory is strong on computer simulations and short on rigorous mathematics. It has an unfortunate-but, as we shall see, inevitable-flavour of mysticism. Wildly interdisciplinary, it hops from history to economics to quantum mechanics and back. It stresses the failures of much current orthodoxy, instead of basking in its own successes. It attracts scientific journalists who can always find a good story; but it repels scientific conservatives who see the public face of complexity theory and don't like it, and who are not going to waste time trying to grasp the weird concepts which allegedly lie behind it.
The underlying theme of complexity theory is that systems composed of large numbers of simple agents, interacting with each other according to simple rules, often develop curious large scale patterns. Instead of trying to solve the mathematical equations for such systems (currently impossible), or to describe them verbally (too imprecise), complexity theorists simulate them on computers and analyse the results with strange new techniques.
What they find is stunning. For example, assemblies of simple mathematical "cells" (far simpler than any biological cell) equipped with simple rules for growth, division, relocation, and chemical constitution, spontaneously assemble themselves into complex structures, patterns, shapes. You don't have to try hard to get them to do this: almost any reasonable set of rules will do. Similarly, in complexity models of evolution, the typical phenomena are those which have puzzled evolutionary theorists for decades: sudden bursts of diversity, long periods of stagnation, the spontaneous increase of order, rudimentary forms of "social" and "sexual" behaviour. In complexity models of economics the typical events are stock market crashes, which conventional economic theory could never explain. Complexity models have a habit of corresponding to the real world. (Adjusting more conventional models to mimic such behaviour usually requires a great deal of effort and special pleading.)
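A concrete miniature of such a system is an elementary cellular automaton, one of the rule-based models mentioned above. Each cell is 0 or 1, and its next state depends only on itself and its two neighbours; the sketch below uses Wolfram's rule 30, a standard example in which a trivially simple rule generates an intricate, apparently random triangle of cells from a single "on" cell:

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton: each cell's next
    state depends only on itself and its two neighbours (wrap-around).
    The 8 bits of `rule` give the next state for each of the 8 possible
    three-cell neighbourhoods."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((rule >> neighbourhood) & 1)
    return out

def run(width=63, steps=20, rule=30):
    """Evolve from a single 'on' cell, collecting one row per step."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows
```

Printing the rows one beneath another (say, "#" for 1 and " " for 0) displays the famous chaotic triangle. Nothing in the eight-entry rule table hints at the complexity of the result, which is exactly the point.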
What do these simulations teach us? Literal-minded biologists look at the simple rules programmed into the complexity theorist's model of cells, and observe that a real cell is far more complex. So they argue that the model is unrealistic, hence no use. But that's only true if you think the game is to reproduce nature exactly. Complexity theorists are playing a different game: gaining insight. Simple rule-based models often produce the same kinds of structures and self-organisation that real biology does. There seems to be a kind of "universality" to the spontaneous patterns of rule-based complex systems. The hope is that this can be understood, so that the simplified models can be used to explain how real cells perform some-not all, but some important ones-of their remarkable functions.
Unfortunately, it is technically difficult to pin down what this universality consists of, when it is valid, and why it occurs. The current way to get round this is to examine dozens of different sets of rules and see whether the same kinds of patterns occur-a sort of Monte Carlo statistical sampling. What most scientists, orthodox or not, would prefer is a coherent logical deduction along mathematical lines. But this may be difficult or even impossible, because the patterns observed in complex systems are usually emergent: that is, they are not obviously built into the component parts or how these interact; they seem to transcend the possibilities available to their components.
This is where complexity theory cannot avoid becoming mystical-or, at least, giving that appearance. Philosophers have struggled for a long time with emergence, and they have reached very few significant conclusions. Emergence has a mystic feel-for example, human consciousness may well be an emergent property of nerve cells, but saying so doesn't bring us any closer to understanding consciousness scientifically. Some philosophers have become very confused about emergence and causality: they feel that because mind is emergent, and (apparently) possesses free will, a conscious mind can choose to interfere with the operation of its own nerve cells and thereby subvert the laws of physics.
The solution lies in a better understanding of emergence. The word "transcend" is too strong. Emergent phenomena do not disobey rules of causality. What makes them appear to lack a cause is that the chain of cause-and-effect leading from the behaviour of the components to that of the whole is so intricate that human beings cannot grasp it. Thus, even though a mind with free will cannot discern the complex causal chain from which its freedom emerges, it cannot choose to disrupt that chain without affecting itself. Emergence is like a mathematical theorem whose statement is simple but whose proof is so long that nobody can follow it.
Complexity theory seems mystical because it is a serious attempt to develop a coherent mathematical theory of emergence. And many of the problems that worry scientists centre upon emergent phenomena. A stock market crash, for example, is the result of a very complex series of trading decisions made by thousands of individuals. It need not have a single obvious cause; it is just one of the universal emergent properties of that sort of system. This does not answer the question "why do stock markets crash?" and it certainly does not help us predict crashes. But the first step in doing this must be to place the problem in the right context-and that context is emergent behaviour in complex systems, not the over-simplifications of conventional mathematical economics.
There are drawbacks too. Although complexity theory is good at reproducing the qualitative behaviour of the real world, it is not yet developed enough on the quantitative side. To critics this shows that the new ideas are all form and no substance, but to enthusiasts it merely underlines the need for further development. Cohen and I call this the "Rolls Royce problem." Someone comes along to your car factory with (they claim) a revolutionary new idea, and you say "I'll believe you if you can improve on a Rolls Royce." But years of effort by thousands of people went into the creation of the Rolls Royce. However good the new idea is, it won't be able to compete unless it is given sufficient opportunity to develop.
Should it be given that chance? Does anything important hinge on the tendency of the natural world to produce mathematical patterns? Why do we want to know how such patterns form, anyway? It would be hard to suggest that the occurrence of, say, Fibonacci numbers in flower petals has any great economic or technological significance. Our new-found understanding of the processes that create this particular pattern is a contribution not to technology, but to what used to be called "natural philosophy." This is often characterised as knowledge for its own sake, but knowledge doesn't have a sake-we do. The natural philosophers sought knowledge because they saw it as the key to understanding, and understanding as the key to power.
More than 250 years ago a group of mathematicians became interested in the vibrations of a violin string. The long term result was the discovery of radio waves, leading to radar, television, and satellite navigation systems. The connection is that electromagnetic fields can form waves, just as the violin string does-on a mathematical level there is no significant difference. The mathematicians were not looking for wireless telegraphy when they studied the vibrations of the violin. They weren't even trying to build a better violin. They just thought that waves were interesting, and wanted to understand them. Had they not done so, physicists would not have noticed that the laws of electromagnetism imply the existence of radio waves, and engineers would not have tried to build the requisite equipment to turn them into a communications medium.
There is an even better reason for studying the patterns of living creatures than just the interconnectedness of the whole. The distinction between BZ style spirals and target patterns is crucial to our own continued existence. The pacemaker wave of electrical activity that spreads over the muscle tissue of your heart, causing it to contract as a whole and thereby function as an effective pump, is a target pattern written in electricity rather than chemicals. If that pattern is disrupted, then-just like the chemical BZ pattern-it turns into masses of swirling spirals. Your heart fibrillates, its muscle fibres no longer contracting in synchrony, and you rapidly die.
More generally, the new ideas tell us that biological form and behaviour are determined by mathematics as well as by recipes encoded in DNA. A recipe alone is not enough: it has to be cooked, and as every cook knows, a perfectly reasonable recipe can sometimes turn into a disaster when it is implemented using real ingredients. You will not avoid such a disaster by studying the recipe book; instead, you must study the dynamics of cookery.
The financial structure of today's science makes it poor at supporting fascinating but flaky new ideas. A committee which must justify its actions to political or industrial masters (who may view any failure as wastage) will prefer to fund yet another piece of mainstream work rather than to gamble on novelty. This is why we need science mavericks like those who are uncovering nature's numbers.