A Mathematician's View of Evolution

Granville Sewell

Mathematics Dept.

University of Texas at El Paso

The Mathematical Intelligencer 22, no. 4 (2000), pp. 5-7

Copyright held by Springer Verlag, NY, LLC

The original published version is available at link.springer.com.


In 1996, Lehigh University biochemist Michael Behe published a book entitled "Darwin's Black Box" [Free Press], whose central theme is that every living cell is loaded with features and biochemical processes which are "irreducibly complex"--that is, they require the existence of numerous complex components, each essential for function. Thus, these features and processes cannot be explained by gradual Darwinian improvements, because until all the components are in place, these assemblages are completely useless, and thus provide no selective advantage. Behe spends over 100 pages describing some of these irreducibly complex biochemical systems in detail, then summarizes the results of an exhaustive search of the biochemical literature for Darwinian explanations. He concludes that while biochemistry texts often pay lip-service to the idea that natural selection of random mutations can explain everything in the cell, such claims are pure "bluster", because "there is no publication in the scientific literature that describes how molecular evolution of any real, complex, biochemical system either did occur or even might have occurred."

When Dr. Behe was at the University of Texas at El Paso in May of 1997 to give an invited talk, I told him that I thought he would find more support for his ideas in mathematics, physics and computer science departments than in his own field. I know a good many mathematicians, physicists and computer scientists who, like me, are appalled that Darwin's explanation for the development of life is so widely accepted in the life sciences. Few of them ever speak out or write on this issue, however--perhaps because they feel the question is simply out of their domain. Yet I believe there are two central arguments against Darwinism, and both seem to be most readily appreciated by those in the more mathematical sciences.

  1. The cornerstone of Darwinism is the idea that major (complex) improvements can be built up through many minor improvements; that the new organs and new systems of organs which gave rise to new orders, classes and phyla developed gradually, through many very minor improvements. We should first note that the fossil record does not support this idea; for example, Harvard paleontologist George Gaylord Simpson ["The History of Life," in Volume I of "Evolution after Darwin," University of Chicago Press, 1960] writes:
    "It is a feature of the known fossil record that most taxa appear abruptly. They are not, as a rule, led up to by a sequence of almost imperceptibly changing forerunners such as Darwin believed should be usual in evolution...This phenomenon becomes more universal and more intense as the hierarchy of categories is ascended. Gaps among known species are sporadic and often small. Gaps among known orders, classes and phyla are systematic and almost always large. These peculiarities of the record pose one of the most important theoretical problems in the whole history of life: Is the sudden appearance of higher categories a phenomenon of evolution or of the record only, due to sampling bias and other inadequacies?"
    An April 1982 Life magazine article (excerpted from Francis Hitching's book, "The Neck of the Giraffe: Where Darwin Went Wrong") contains the following report:

    "When you look for links between major groups of animals, they simply aren't there...'Instead of finding the gradual unfolding of life', writes David M. Raup, a curator of Chicago's Field Museum of Natural History, 'what geologists of Darwin's time and geologists of the present day actually find is a highly uneven or jerky record; that is, species appear in the fossil sequence very suddenly, show little or no change during their existence, then abruptly disappear.' These are not negligible gaps. They are periods, in all the major evolutionary transitions, when immense physiological changes had to take place."
    Even among biologists, the idea that new organs, and thus higher categories, could develop gradually through tiny improvements has often been challenged. How could the "survival of the fittest" guide the development of new organs through their initial useless stages, during which they obviously present no selective advantage? (This is often referred to as the "problem of novelties".) Or guide the development of entire new systems, such as nervous, circulatory, digestive, respiratory and reproductive systems, which would require the simultaneous development of several new interdependent organs, none of which is useful, or provides any selective advantage, by itself? French biologist Jean Rostand, for example, wrote ["A Biologist's View," Wm. Heinemann Ltd. 1956]:

    "It does not seem strictly impossible that mutations should have introduced into the animal kingdom the differences which exist between one species and the next...hence it is very tempting to lay also at their door the differences between classes, families and orders, and, in short, the whole of evolution. But it is obvious that such an extrapolation involves the gratuitous attribution to the mutations of the past of a magnitude and power of innovation much greater than is shown by those of today."
    Behe's book is primarily a challenge to this cornerstone of Darwinism at the microscopic level. Although we may not be familiar with the complex biochemical systems discussed in this book, I believe mathematicians are well qualified to appreciate the general ideas involved. And although an analogy is only an analogy, perhaps the best way to understand Behe's argument is by comparing the development of the genetic code of life with the development of a computer program. Suppose an engineer attempts to design a structural analysis computer program, writing it in a machine language that is totally unknown to him. He simply types out random characters at his keyboard, and periodically runs tests on the program to recognize and select out chance improvements when they occur. The improvements are permanently incorporated into the program while the other changes are discarded. If our engineer continues this process of random changes and testing for a long enough time, could he eventually develop a sophisticated structural analysis program? (Of course, when intelligent humans decide what constitutes an "improvement", this is really artificial selection, so the analogy is far too generous.)

    If a billion engineers were to type at the rate of one random character per second, there is virtually no chance that any one of them would, given the 4.5-billion-year age of the Earth to work on it, accidentally duplicate a given 20-character improvement (a back-of-the-envelope version of this estimate is sketched at the end of this point). Thus our engineer cannot count on making any major improvements through chance alone. But could he not perhaps make progress through the accumulation of very small improvements? The Darwinist would presumably say yes, but to anyone who has had even minimal programming experience this idea is equally implausible. Major improvements to a computer program often require the addition or modification of hundreds of interdependent lines, no one of which makes any sense, or results in any improvement, when added by itself. Even the smallest improvements usually require adding several new lines. It is conceivable that a programmer unable to look ahead more than 5 or 6 characters at a time might be able to make some very slight improvements to a computer program, but it is inconceivable that he could design anything sophisticated without the ability to plan far ahead and to guide his changes toward that plan.

    If archeologists of some future society were to unearth the many versions of my PDE solver, PDE2D, which I have produced over the last 20 years, they would certainly note a steady increase in complexity over time, and they would see many obvious similarities between each new version and the previous one. In the beginning it was only able to solve a single linear, steady-state, 2D equation in a polygonal region. Since then, PDE2D has developed many new abilities: it now solves nonlinear problems, time-dependent and eigenvalue problems, systems of simultaneous equations, and it now handles general curved 2D regions. Over the years, many new types of graphical output capabilities have evolved, and in 1991 it developed an interactive preprocessor, and more recently PDE2D has adapted to 3D and 1D problems. An archeologist attempting to explain the evolution of this computer program in terms of many tiny improvements might be puzzled to find that each of these major advances (new classes or phyla??) appeared suddenly in new versions; for example, the ability to solve 3D problems first appeared in version 4.0. Less major improvements (new families or orders??) appeared suddenly in new subversions; for example, the ability to solve 3D problems with periodic boundary conditions first appeared in version 5.6. In fact, the record of PDE2D's development would be similar to the fossil record, with large gaps where major new features appeared, and smaller gaps where minor ones appeared. That is because the multitude of intermediate programs between versions or subversions which the archeologist might expect to find never existed, because--for example--none of the changes I made for version 4.0 made any sense, or provided PDE2D any advantage whatever in solving 3D problems (or anything else), until hundreds of lines had been added.

    Whether at the microscopic or macroscopic level, major, complex evolutionary advances involving new features (as opposed to minor, quantitative changes such as an increase in the length of the giraffe's neck, or the darkening of the wings of a moth, which clearly could occur gradually) also involve the addition of many interrelated and interdependent pieces. These complex advances, like those made to computer programs, are not always "irreducibly complex"--sometimes there are intermediate useful stages. But just as major improvements to a computer program cannot be made 5 or 6 characters at a time, certainly no major evolutionary advance is reducible to a chain of tiny improvements, each small enough to be bridged by a single random mutation.
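    The "virtually no chance" claim above can be made concrete with a back-of-the-envelope estimate (assuming, purely for illustration, a machine-language alphabet of 64 equally likely characters, a figure the text does not specify, and counting roughly one candidate 20-character window per keystroke):

    $$
    \underbrace{10^{9}}_{\text{engineers}} \times \underbrace{(4.5\times 10^{9}\ \text{yr})(3.15\times 10^{7}\ \text{s/yr})}_{\approx\,1.4\times 10^{17}\ \text{s each}} \approx 1.4\times 10^{26}\ \text{keystrokes},
    $$

    $$
    P(\text{given 20-character string at a given keystroke}) = 64^{-20} \approx 7.5\times 10^{-37},
    $$

    $$
    \text{expected accidental matches} \approx 1.4\times 10^{26} \times 7.5\times 10^{-37} \approx 10^{-10}.
    $$

    Even with an alphabet of only 26 characters the expected number of matches is still below $10^{-2}$, so the conclusion is not sensitive to the assumed alphabet size.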

  2. The other point is very simple, but also seems to be appreciated only by more mathematically-oriented people. It is that to attribute the development of life on Earth to natural selection is to assign to it--and to it alone, of all known natural "forces"--the ability to violate the second law of thermodynamics and to cause order to arise from disorder. It is often argued that since the Earth is not a closed system--it receives energy from the Sun, for example--the second law is not applicable in this case. It is true that order can increase locally, if the local increase is compensated by a decrease elsewhere, i.e., an open system can be taken to a less probable state by importing order from outside (the textbook form of this "compensation" argument is recalled at the end of this point). For example, we could transport a truckload of encyclopedias and computers to the moon, thereby increasing the order on the moon, without violating the second law. But the second law of thermodynamics--at least the underlying principle behind this law--simply says that natural forces do not cause extremely improbable things to happen,[1] and it is absurd to argue that because the Earth receives energy from the Sun, this principle was not violated here when the original rearrangement of atoms into encyclopedias and computers occurred.

    The biologist studies the details of natural history, and when he looks at the similarities between two species of butterflies, he is understandably reluctant to attribute the small differences to the supernatural. But the mathematician or physicist is likely to take the broader view. I imagine visiting the Earth when it was young and returning now to find highways with automobiles on them, airports with jet airplanes, and tall buildings full of complicated equipment, such as televisions, telephones and computers. Then I imagine the construction of a gigantic computer model[2] which starts with the initial conditions on Earth 4 billion years ago and tries to simulate the effects that the four known forces of physics (the gravitational, electromagnetic and strong and weak nuclear forces) would have on every atom and every subatomic particle on our planet (perhaps using random number generators to model quantum uncertainties!). If we ran such a simulation out to the present day, would it predict that the basic forces of Nature would reorganize the basic particles of Nature into libraries full of encyclopedias, science texts and novels, nuclear power plants, aircraft carriers with supersonic jets parked on deck, and computers connected to laser printers, CRTs and keyboards? If we graphically displayed the positions of the atoms at the end of the simulation, would we find that cars and trucks had formed, or that supercomputers had arisen? Certainly we would not, and I do not believe that adding sunlight to the model would help much. Clearly something extremely improbable has happened here on our planet, with the origin and development of life, and especially with the development of human consciousness and creativity.
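    For reference, the "compensation" argument referred to above is usually stated as the textbook entropy balance (a standard formulation, not taken from this article): for any system exchanging energy with its surroundings,

    $$
    \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ \ge\ 0,
    $$

    so the system's entropy may decrease (order may increase locally) provided the surroundings gain at least as much. For the Earth, the usual accounting is that outgoing thermal radiation (emitted near 255 K) carries away more entropy per unit energy than the incoming sunlight (emitted near 5800 K) brings in, leaving room for local decreases. Point 2 above, and footnote 1 below, argue that this bookkeeping by itself does not make a macroscopically describable, extremely improbable rearrangement of atoms any less improbable.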


Footnotes

[1] An unfortunate choice of words. I should have said that the underlying principle behind the second law is that natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view. See my 2017 Physics Essays article "On 'Compensating' Entropy Decreases" for a more complete treatment of this point.

[2] Of course constructing such a model is impossible, but imagining it gets across the point that natural selection, the one unintelligent "force" that is widely credited with the ability to create spectacular order out of disorder, is not a real physical force and cannot be included in such a simulation, as well as the point that unintelligent forces cannot explain human intelligence.