WOLFRAM NOTEBOOK

Biology 101 - Lesson 03: The Recent Breakthrough

Okay so what is his new work?
For a long time, as mentioned, he spent most of his efforts on the developmental side of biology rather than the evolutionary side. While attending biology conferences he would often wonder what the space of all possible rules is like, and in NKS, in the context of discussing evolution, he actually noted that nearby rules did seem to have similar behavior:
But, he never really had the intuition or motivation to think that this was worth exploring further.
That changed when he started doing more work on machine learning, both practically for the development of Wolfram Language (Mathematica) and scientifically in understanding how ChatGPT works. In one of his posts on machine learning, while trying to answer the question, “Can one find a system that does X?”, he compared what a neural network could achieve with what a random iterative search could achieve. This, in combination with hearing about his friend Greg Chaitin’s work relating the Busy Beaver problem to evolution, motivated him to try out some adaptive algorithms on cellular automata. More specifically, he tried “evolving” a cellular automaton, changing one rule case at a time, to find a pattern with a particular finite length:
And to his surprise, it actually worked. Here is an example evolution to length 50:
This simple idea then blossomed into a bunch of new work in theoretical biology from Stephen in early 2024. From this model of evolution, there were many interesting experiments to try, conclusions to be drawn, and implications for biology to share. Before I get to those though, I should be more precise about how the “adaptive evolution” works.
Starting from the null rule, you make a single random change (a mutation) to one case in the rule table. If that rule, run from an initial condition of a single red square, produces a pattern of longer or equal length (length being the number of steps until everything goes white), then you keep the change. Here’s what it looks like when we show the successive changes to the ruleset, or “genotype”, that were kept:
And here are the “breakthrough phenotypes”, that is, the steps in the evolution where a longer finite length (higher fitness) was achieved:
So already you can see the qualitative resemblance to biological evolution.
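The adaptive loop just described is simple enough to sketch in a few lines of code. This is a minimal illustration, not Stephen’s actual (Wolfram Language) setup: the particular rule space (k = 2, r = 2), the step cap that stands in for an “infinite” lifetime, and the mutation count are all assumptions made here just to have something runnable.

```python
import random
from itertools import product

K, R = 2, 2                       # two colors, range-2 rules (an assumed rule space)
NBHD = 2 * R + 1                  # neighborhood width: 5 cells
MAX_STEPS = 50                    # lifetimes beyond this cap count as "infinite" and are rejected

def lifetime(table):
    """Number of steps until the pattern dies out to all white, or None if capped."""
    cells = (1,)                  # single black cell; the background is implicitly white
    for step in range(1, MAX_STEPS + 1):
        padded = (0,) * (2 * R) + cells + (0,) * (2 * R)
        cells = tuple(table[padded[i:i + NBHD]]
                      for i in range(len(padded) - NBHD + 1))
        if not any(cells):
            return step
    return None

def evolve(mutations=500, seed=3):
    rng = random.Random(seed)
    nbhds = list(product(range(K), repeat=NBHD))
    table = {n: 0 for n in nbhds}             # the null rule: every neighborhood -> white
    fit = lifetime(table)                     # the null rule dies at step 1
    history = [fit]
    for _ in range(mutations):
        n = rng.choice(nbhds)
        if n == (0,) * NBHD:
            continue                          # keep all-white -> white so the background stays white
        trial = dict(table)
        trial[n] = rng.randrange(K)           # one point mutation to the "genotype"
        f = lifetime(trial)
        if f is not None and f >= fit:        # keep finite, equal-or-better mutations
            table, fit = trial, f
            history.append(fit)
    return table, fit, history
```

With a fixed random seed the run is deterministic, and the `history` list records the kept fitness values, which by construction can only stay equal or increase, mirroring the never-decreasing character of the adaptive process.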
And the thing that is most striking about these patterns is their sheer complexity. In other words, the mechanisms a pattern uses to achieve the “goal” of a longer finite lifetime are very intricate and therefore difficult to explain. Thus, in many cases, the best we can do in answering why a particular pattern was “selected” is to say, “because it was”.
And in fact, running more evolutions, we see that in each case very different underlying mechanisms are used, which suggests that most of the details of a pattern are not a result of the selection process, but rather just happened to occur along that particular evolutionary path:
And Stephen shows that this is quite a general phenomenon. Looking at different rule spaces, for example, we see more of the same; here is the k = 4, r = 1 case:
Or the two-dimensional case, where we also see considerable complexity:
Or even with different fitness functions; here we’re selecting for wider patterns instead of longer ones:
Or aspect ratio; here are some results of searching for a pattern with a 3:1 aspect ratio:
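These alternative fitness functions are easy to state as code. Here is a hedged sketch of how a “width” score and an “aspect ratio” score might be measured on a finished run (a list of rows of 0s and 1s). The exact scoring Wolfram uses isn’t spelled out here, so treating aspect-ratio fitness as closeness of height:width to the 3:1 target is an assumption made for illustration.

```python
def width(history):
    """Widest extent of black cells across all rows of a finished run."""
    cols = [j for row in history for j, v in enumerate(row) if v]
    return max(cols) - min(cols) + 1 if cols else 0

def height(history):
    """Number of steps before the pattern died out (rows in the run)."""
    return len(history)

def aspect_ratio_fitness(history, target=3.0):
    """Closeness of the height:width ratio to the target (higher is fitter)."""
    w = width(history)
    return float("-inf") if w == 0 else -abs(height(history) / w - target)

# A tiny hand-made history: 6 steps tall, 2 cells wide -> exactly a 3:1 ratio
toy = [(1, 0), (1, 1), (0, 1), (1, 1), (1, 0), (0, 1)]
```

Selecting for wider patterns would just maximize `width(history)`, while the aspect-ratio search would maximize `aspect_ratio_fitness`, with the same keep-if-not-worse mutation loop as before.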
So the claim is that natural selection in biology works similarly to these cases in the sense that evolution selects organisms based on some high-level metric of fitness. But, because biological systems are capable of arbitrarily complex behavior (just like our automata), the processes used to satisfy the fitness functions will be full of irreducibility.
In Stephen’s language, evolution is like a computationally bounded observer of computationally unbounded processes.
In NKS, Stephen had models for subsystems within biology, like pigmentation, growth and shape. And with these models he showed that much of the variation and complexity of these subsystems arose as a consequence of computational irreducibility.
With these adaptively evolved cellular automata, he now has a generalized model for biological processes, not just pigmentation, growth, and shape. And with this more abstract model, he can give a clearer picture of how biological processes behave in general.
And as we’ve seen, these pictures strongly suggest once again that the dominant force in biological systems is computational irreducibility and not natural selection.
Okay, so what else can we take away from this new model?
So far, we’ve been discussing the individual patterns, but what about the space of all possible patterns?
Well, we can actually draw a “fitness landscape” by showing the graph of all possible rules:
Here the height corresponds to the length of the automaton’s pattern, or “fitness”. And the goal of adaptive evolution is thus to trace a never-go-down path in this landscape:
And from this, we can get a sense of what is required for evolution to work. First, there needs to be enough dimensionality in the landscape. Now, actual genomes are way bigger than the number of rules we have here, so that isn’t an issue. Second, the “viable nodes” defined by the fitness function need to be numerous enough so that we don’t get stuck.
But these criteria alone aren’t enough because we could presumably have all the fit nodes cluster in one area. And the key point is that because of computational irreducibility, this won’t happen, and the nodes will be spread out enough so that we can indeed find an evolutionary path.
And this allows us to see how biological systems can navigate their complicated fitness landscapes, and make progress towards some high-level fitness function.
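The never-go-down path idea can be made concrete with a small search sketch. Note that the fitness landscape below is a toy stand-in, not the actual cellular-automaton landscape: genotypes are 4-bit strings scored by their number of 1s, and two genotypes are arbitrarily marked non-viable so that some routes are blocked. The point is only to illustrate tracing a non-decreasing path through viable nodes.

```python
from collections import deque

N = 4
BLOCKED = {"0111", "1011"}        # arbitrarily chosen non-viable genotypes

def fitness(g):
    """Toy fitness: number of 1 bits; None marks a non-viable genotype."""
    return None if g in BLOCKED else g.count("1")

def neighbors(g):
    """All genotypes one point mutation away."""
    return [g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:] for i in range(N)]

def never_go_down_path(start, goal):
    """BFS over single mutations, stepping only to viable, equal-or-fitter genotypes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        g = path[-1]
        if g == goal:
            return path
        for h in neighbors(g):
            fh = fitness(h)
            if h not in seen and fh is not None and fh >= fitness(g):
                seen.add(h)
                queue.append(path + [h])
    return None                   # no non-decreasing path exists

path = never_go_down_path("0000", "1111")
```

Because the blocked nodes only close off some routes, the search still finds a route of strictly increasing fitness from the all-zero genotype to the maximum, which is the behavior the landscape picture is meant to convey.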
Similarly, because of the simplicity of the model, we can actually draw the multiway graph of all viable patterns (for the k = 2, r = 3/2 rule space):
And from this, interestingly, we can see our model’s version of discrete branches in the tree of life.
Zooming out further, we can look at this multiway graph (in the larger symmetric k = 2, r = 2 rule space), but instead of just one fitness function, we can compare two different fitness functions:
And so what we’re starting to see is that we can view evolution as a computationally bounded observer, selecting particular slices of the rule space. Stephen then connects this idea to observers in physics, suggesting that, much like the Second Law of Thermodynamics, there might be similar high-level laws for the macroscopic “flow of evolution”.