In my August 8, 2011 post, "The End Of A Physics Worldview: Heraclitus And The Watershed of Life," I argued that no laws entail the evolution of the biosphere. This argument has been developed further with mathematicians Giuseppe Longo and Maël Montévil on arXiv.org in "No entailing laws, but enablement in the evolution of the biosphere."
If we are right and no laws entail the evolution of the biosphere, can we still have biological laws? I think the answer is "yes," and here I give one example among several.
In 1961 and 1963 Jacob and Monod cracked the problem of cell differentiation, that is, how cells sharing the same set of genes could have different active genes in different cell types. They showed that genes could make proteins that turn other genes "on" or "off." Then, in their 1963 paper, they gave a simple example of two genes, A and B, each turning the other "off." This little genetic circuit has two stable steady-state "attractors": A "on" and B "off," versus A "off" and B "on." These two attractors are two patterns of gene activity which might correspond to two cell types with the same set of genes.
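The little two-gene circuit can be sketched in a few lines of code. This is my illustration, not Jacob and Monod's notation: each gene is a Boolean value, and each gene is "on" at the next step exactly when the other is "off" now (mutual repression).

```python
def step(a, b):
    """Synchronous update of the mutual-repression circuit:
    each gene turns ON iff the other gene is currently OFF."""
    return (not b, not a)

# The two stable steady states ("attractors"): each maps to itself.
assert step(True, False) == (True, False)   # A on, B off
assert step(False, True) == (False, True)   # A off, B on
print("both attractors are fixed points of the update rule")
```

Starting the circuit in either configuration, it stays there forever; the same genes, wired the same way, support two distinct stable patterns of activity.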
In order to study the behavior of large networks with thousands of genes turning one another on and off, in 1964 I invented the idea of Random Boolean Networks (RBN). Here each model gene is a binary, "on" or "off," device. A Random Boolean Network is constructed of N such model genes, each with inputs from K model genes, by assigning at random the K inputs to each gene from among the N, and assigning at random to each gene a "logic function" or "Boolean function," like "OR" or "AND," on its K inputs. Thus there is a vast class, or ensemble, of model RBN genetic networks for any values of N and K. The question I sought to answer was: What are the typical, or "generic," behaviors of members of different ensembles as K varies from 1 to 2 to 3 to N?
What I found startled me. For K = 2, with all else assigned at random, such networks are a madhatterly scramble of inputs and logic, yet behave with stunning order. The attractors of such networks are tiny and highly localized in the "state space" of 2 to the N patterns, or states, of the N genes being on or off simultaneously.
I presumed that cell types were "attractors," that is, repeating cycles of states to which the network settled. For K greater than 2, networks had very disordered, chaotic attractors. So K = 2 networks, with their tiny, highly localized attractors, were starting to be plausible models of genetic regulatory networks. Later we learned that the number of inputs can be larger than 2 and the networks behave with high order if the choice of Boolean functions is biased, not random.
If a cell type is an attractor and, like the tiny Jacob-Monod circuit, a network can have more than one attractor, then how many different attractor cell types does such a network have? The answer scales as the square root of N, the number of genes. To my delight, plotting the number of cell types against DNA content per cell across many phyla and organisms, the number of cell types also follows a square root function. (These results have held up using more realistic model RBN with "asynchronous" timing of when each gene changes activities.)
Moreover, it turned out that the typical number of states on a "state cycle" attractor, the repeating cyclic pattern of gene activities, or states, also scales as the square root of N. The most obvious cycle in cells is the cell cycle, and if one plots cell cycle times against DNA per cell, it too is a square root function. Recent experiments in yeast have shown that if one blocks the biochemical oscillations of small molecules thought to time the yeast cell cycle, an oscillation of genes turning one another on and off (up and down) still occurs and the cell cycle persists.
Another property of interest is that each of these model cell type attractors is stable to most perturbations that "flip" the activity of a single gene, but for some perturbations can transition to another attractor cell type. Homeostasis and differentiation pathways among model cell types are "generic," like real cell differentiation.
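This homeostasis can be tested directly in simulation: take a state on an attractor, flip each gene in turn, and see where the network settles, back on the same attractor, or on a different one. A minimal, self-contained sketch (the network size, seed, and helper names are my own illustrative choices; the counts depend on which random network is drawn):

```python
import random
from itertools import product

def make_rbn(n, k, rng):
    """Random Boolean network: k random inputs and a random truth table per gene."""
    ins = [rng.sample(range(n), k) for _ in range(n)]
    tab = [{b: rng.randrange(2) for b in product((0, 1), repeat=k)}
           for _ in range(n)]
    return lambda s: tuple(tab[g][tuple(s[i] for i in ins[g])] for g in range(n))

def attractor_of(step, start):
    """Iterate until a state repeats; return the state cycle as a frozenset."""
    seen, s = [], start
    while s not in seen:
        seen.append(s)
        s = step(s)
    return frozenset(seen[seen.index(s):])

rng = random.Random(1)
n = 10
step = make_rbn(n, 2, rng)
home = attractor_of(step, tuple(rng.randrange(2) for _ in range(n)))

# Flip each gene of one attractor state and see where the network settles:
# back home (homeostasis) or on another attractor (a differentiation-like move).
state = next(iter(home))
returned = 0
for g in range(n):
    flipped = state[:g] + (1 - state[g],) + state[g + 1:]
    returned += attractor_of(step, flipped) == home
print(f"{returned} of {n} single-gene flips return to the same attractor")
```

Typically most flips flow back to the home attractor while a few carry the network to a different one, which is the generic homeostasis-plus-differentiation behavior described above.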
Finally, K = 2 networks, and a whole subclass of others, turned out to be at the "edge of chaos," poised at "criticality" between an ordered regime and a chaotic regime. Growing evidence suggests real cells are critical.
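One standard way to see this criticality (a sketch under my own parameter choices, in the spirit of the Derrida one-step measure) is to start two copies of a network in states that differ in a single gene, update both once, and average the resulting Hamming distance over many random networks. Below 1 the difference dies out (order), above 1 it spreads (chaos); for unbiased K = 2 networks the expected spread is exactly 1, the critical value.

```python
import random
from itertools import product

def one_step_spread(n, k, trials, rng):
    """Average Hamming distance after one synchronous update, starting
    from two states that differ in a single randomly chosen gene."""
    total = 0
    for _ in range(trials):
        ins = [rng.sample(range(n), k) for _ in range(n)]
        tab = [{b: rng.randrange(2) for b in product((0, 1), repeat=k)}
               for _ in range(n)]
        step = lambda s: tuple(tab[g][tuple(s[i] for i in ins[g])]
                               for g in range(n))
        s = tuple(rng.randrange(2) for _ in range(n))
        j = rng.randrange(n)                      # flip one gene
        t = s[:j] + (1 - s[j],) + s[j + 1:]
        total += sum(a != b for a, b in zip(step(s), step(t)))
    return total / trials

rng = random.Random(2)
spread = one_step_spread(n=20, k=2, trials=3000, rng=rng)
print(f"average one-step spread for K = 2: {spread:.2f} (1.0 = critical)")
```

Rerunning with K = 1 gives a spread below 1 (the ordered regime), and with larger K a spread above 1 (the chaotic regime), locating K = 2 at the boundary.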
What are we to make of this kind of "explanation" and candidate laws for biology? First, after billions of years of evolution, real genetic networks are not random, but we can make "refined ensembles" that reflect this non-randomness. The typical, generic behaviors of random members of refined ensembles are better candidate laws.
But what kind of laws?
Aristotle proposed four kinds of causes: material, formal, efficient and final. The material cause of a house is the bricks. A formal cause is "what-it-is-to-be." An efficient cause is the sculptor chiseling the marble. The final cause is the purpose or end, say, my desire to build the house.
Newton mathematized Aristotle's efficient cause in his laws of motion, which give the forces among the billiard balls rolling on a billiard table. Then integration, given initial and boundary conditions, yields the deduced, hence entailed, trajectories of the balls. This has been our "covering law" model of scientific explanation since Newton: universal laws of motion, initial and boundary conditions, and deduced, hence entailed, trajectories.
But ensemble models as above are clearly not efficient causes. What kind of explanation do they offer? I think the answer is that they offer Formal Cause Laws. For genetic regulatory networks: "what it is to be."
Then we can have laws about evolution where there are no entailing laws. These are Formal Cause Laws implied by refined ensemble theories.