Many systems in nature consist of a large number of relatively simple units that interact only locally and without central control, yet the system as a whole can perform sophisticated global information processing or produce intricate, globally coordinated behaviors. A well-known example is quorum sensing in microbial communities, where the basic units are bacteria.

*Quorum sensing* is the ability of bacteria (or other single-celled organisms) to assess population density in order to coordinate gene expression, biofilm formation, or other activities of the entire community. However, no individual bacterium can communicate directly with the entire colony. Each bacterium can only communicate and interact locally, through the exchange of chemical signals with its direct neighbors, and there is no central control in the system. Yet the community as a whole manages to assess (changes in) overall population density and respond in a globally coordinated manner. Quorum sensing has also been observed in communities of multicellular organisms such as plants and social insects.

Such global information processing or globally coordinated behavior, arising out of just local interactions without central control, is generally referred to as *emergent computation*. Other examples of emergent computation in biological systems are sensory input classification through synchronized oscillations in the brain (where the basic units are neurons), synchronous flashing in firefly colonies (where the basic units are individual fireflies), and the aggregation of single-celled *Dictyostelium discoideum* amoebae into one multicellular body (where the basic units are the individual amoebae).

It is still not fully understood, though, how such emergent computation is achieved, or how this ability has evolved over time in biological systems. However, computer models of decentralized systems with only local interactions can help in providing some deeper insight into these questions. In particular, a simple class of models known as *cellular automata* has been studied extensively in this context.

## Cellular Automata

In its simplest form, a *cellular automaton* (CA) consists of a linear array of “cells”, each of which can be in one of two states, say zero or one. At discrete time steps (or iterations), all cells simultaneously update their state according to a fixed update rule, which depends on their current local *neighborhood configuration*, consisting of the cell itself together with its left and right neighbors. This update rule simply states, for each possible local neighborhood configuration, what the new state of the center cell will be. Given two possible states and a three-cell neighborhood, there are thus 2 × 2 × 2 = 2^{3} = 8 possible neighborhood configurations. An example of such an update rule is given in the table below, with the eight possible local neighborhood configurations given in the top row, and the corresponding new cell states in the bottom row.

| neighborhood | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
|--------------|-----|-----|-----|-----|-----|-----|-----|-----|
| new state    | 0   | 1   | 0   | 0   | 1   | 0   | 0   | 0   |

This simplest form of a CA is known as an *elementary cellular automaton* (ECA), and the particular update rule shown in the table above is known as ECA 18. However, note that the bottom row (new cell states) could have had any assignment of zeros and ones, giving rise to 2^{8}=256 possible ECA update rules. Depending on which particular rule is used, the system as a whole (the entire array of cells iterated over time) can show very different types of dynamical behavior, from fixed-point or simple periodic dynamics to very complex or even (seemingly) random behavior.
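The standard ECA numbering can be decoded in a few lines of code. The following is a minimal Python sketch (the function name is just an illustrative choice): bit *i* of the rule number gives the new state for the neighborhood whose three cell states, read as a binary number, equal *i*.

```python
def rule_table(rule_number):
    """Map an ECA rule number (0-255) to a dict: (left, center, right) -> new state."""
    table = {}
    for i in range(8):
        # The neighborhood (left, center, right) is the 3-bit encoding of i.
        neighborhood = ((i >> 2) & 1, (i >> 1) & 1, i & 1)
        # Bit i of the rule number is the new center-cell state for that neighborhood.
        table[neighborhood] = (rule_number >> i) & 1
    return table

table18 = rule_table(18)
# For rule 18 (binary 00010010), only neighborhoods 001 and 100 map to state 1,
# matching the table above.
```

Since 18 in binary is 00010010, exactly two bits are set, which recovers the two “1” entries in the table.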

To illustrate this, the figure below shows the dynamical behavior of four different ECAs in so-called *space-time diagrams*, with white representing the state zero and black representing the state one. In these diagrams, space is horizontal (100 cells wide) and time is vertical (100 iterations down). The top row in each diagram is the *initial configuration* (IC), which was generated at random. Each subsequent row in a space-time diagram is the CA configuration after applying the respective update rule to all cells simultaneously. Note also that *periodic boundary conditions* are used, i.e., the array of cells is considered to be circular so that the left-most cell and the right-most cell are each other's neighbors.

The space-time diagram in the bottom-left of the figure was produced using the update rule of ECA 18 as given in the table above. This elementary CA is actually capable of generating patterns that look very similar to those observed in some biological systems such as seashells, as the image below shows. Using a simple online app, you can explore different ECA update rules yourself.

## Emergent computation in CAs

An example of actual emergent computation in cellular automata is provided by the so-called *GKL rule*. This CA update rule, named after its designers Gacs, Kurdyumov, and Levin, is defined on a local neighborhood of *seven* cells. In other words, the new state of a cell depends on its own current state and that of its three nearest neighbors on either side (called a *radius* of three). This gives rise to an update rule with 2^{7}=128 entries. However, rather than stating the full 128-entry table, the GKL rule can be described more easily in words: If a cell is currently in state 0 (white), its next state will be the majority of the current states of the cell itself and its first and third neighbor to the *left*. If a cell is currently in state 1 (black), its next state will be the majority of the current states of the cell itself and its first and third neighbor to the *right*.
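The verbal description above translates directly into a single update step. This is a minimal sketch (the function name is an illustrative choice), again using modular indexing for the circular array:

```python
def gkl_step(cells):
    """One synchronous GKL update on a circular array of 0/1 cells."""
    n = len(cells)
    new = []
    for i, state in enumerate(cells):
        if state == 0:
            # White cell: majority of itself and its 1st and 3rd neighbors to the LEFT.
            votes = cells[i] + cells[(i - 1) % n] + cells[(i - 3) % n]
        else:
            # Black cell: majority of itself and its 1st and 3rd neighbors to the RIGHT.
            votes = cells[i] + cells[(i + 1) % n] + cells[(i + 3) % n]
        new.append(1 if votes >= 2 else 0)
    return new
```

Note that the all-white and all-black configurations are fixed points of this rule, which is exactly what the density classification task requires of the answer states.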

The GKL rule was originally designed to provide robustness against small errors (noise) in the CA’s dynamical behavior. However, it also turned out to perform quite well on a computational task known as *density classification*, where the CA has to decide whether there are more white cells or more black cells in the initial configuration (IC). In particular, if there are more white cells in the IC, the CA eventually has to settle down to an all-white configuration (within a limited number of iterations). On the other hand, if there are more black cells in the IC, it has to settle down to an all-black configuration. The GKL rule performs this task with about 80% accuracy, i.e., it gives the correct answer (settles down to the correct answer state) on about 80% of random ICs. The figure below shows the dynamical behavior of the GKL rule.
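This accuracy figure can be estimated empirically. The sketch below (restating the GKL update step so it is self-contained) runs the rule on random ICs and counts how often it settles to the majority state; the lattice size, iteration budget, and trial count are illustrative choices, not the settings used in the original studies:

```python
import random

def gkl_step(cells):
    """One synchronous GKL update on a circular array of 0/1 cells."""
    n = len(cells)
    out = []
    for i, state in enumerate(cells):
        if state == 0:
            votes = cells[i] + cells[(i - 1) % n] + cells[(i - 3) % n]
        else:
            votes = cells[i] + cells[(i + 1) % n] + cells[(i + 3) % n]
        out.append(1 if votes >= 2 else 0)
    return out

def classify(cells, max_steps=200):
    """Iterate until the configuration is uniform (or the step budget runs out)."""
    for _ in range(max_steps):
        cells = gkl_step(cells)
        if sum(cells) in (0, len(cells)):
            break
    return cells

def accuracy(width=49, trials=200, seed=1):
    """Fraction of random ICs on which GKL settles to the majority state."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        ic = [rng.randint(0, 1) for _ in range(width)]  # odd width avoids ties
        majority = 1 if sum(ic) > width // 2 else 0
        if classify(ic) == [majority] * width:
            correct += 1
    return correct / trials
```

On a small lattice like this, the estimate lands in the same ballpark as the roughly 80% reported for the GKL rule.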

Note that density classification is a non-trivial task for a CA, as it requires global information processing (the overall density of black and white cells is a global property of the system), even though each individual cell can only interact locally, without any central control. This computational task for a CA is thus very similar to, e.g., quorum sensing in bacterial colonies.

As the above space-time diagrams show, the dynamics of the GKL rule quickly settles down into local all-white or all-black regions, thus representing local densities. However, the conflicts between these local regions need to be resolved, which is mostly done by means of a third pattern, the checkerboard (alternating black and white) regions. This way, the local conflicts are resolved at larger and larger time and length scales, until eventually a global answer is reached.

## Evolution of emergent computation

The GKL rule shows how global information processing, or globally coordinated behavior, can be achieved in decentralized systems with only local interactions. However, this CA rule was designed by hand and does not yet provide any insight into how, or whether, evolution could give rise to such behavior. To investigate this question, a genetic algorithm can be used to try and *evolve* emergent computation in cellular automata.

A *genetic algorithm* is a simple computer program that mimics natural evolution, and which is often used to find good approximate solutions to difficult optimization problems (see our earlier article about such evolutionary algorithms). However, such a computer program can also be used as a model of an evolutionary process. In particular, it can be used to evolve cellular automata update rules to perform the density classification task.
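To make the idea concrete, here is a deliberately minimal GA sketch over radius-three rules, each represented as a 128-bit table, with fitness measured as the fraction of sample ICs classified correctly. All parameters (population size, selection scheme, lattice size, mutation rate) are arbitrary illustrative choices, not the setup used in the actual experiments described below:

```python
import random

rng = random.Random(0)
WIDTH, STEPS = 15, 30              # tiny lattice keeps this sketch fast

def ca_step(cells, rule):
    """One synchronous radius-3 update; the 128-entry rule table is indexed
    by the 7-cell neighborhood read as a binary number."""
    n = len(cells)
    out = []
    for i in range(n):
        idx = 0
        for d in range(-3, 4):
            idx = (idx << 1) | cells[(i + d) % n]
        out.append(rule[idx])
    return out

def fitness(rule, n_ics=10):
    """Fraction of random ICs on which the rule settles to the majority state."""
    correct = 0
    for _ in range(n_ics):
        cells = [rng.randint(0, 1) for _ in range(WIDTH)]
        majority = 1 if sum(cells) > WIDTH // 2 else 0
        for _ in range(STEPS):
            cells = ca_step(cells, rule)
        if cells == [majority] * WIDTH:
            correct += 1
    return correct / n_ics

def evolve(pop_size=10, generations=5, mutation_rate=0.02):
    """Elitist GA: keep the better half, refill with crossover plus mutation."""
    pop = [[rng.randint(0, 1) for _ in range(128)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, 128)                  # one-point crossover
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]         # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best_rule = evolve()
```

A toy run like this will not rediscover a GKL-class rule, but it shows the structure of the experiment: rule tables as genomes, classification accuracy on sampled ICs as fitness, and selection with variation from generation to generation.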

Physicist Norman Packard was the first to use a genetic algorithm to search through the large space of possible radius-three CA update rules to try and find one that can perform the density classification task. This was followed by more detailed studies by the (former) Evolving Cellular Automata group, led by computer scientist Melanie Mitchell and physicist Jim Crutchfield, and based at the Santa Fe Institute. Their genetic algorithm experiments resulted in CA update rules with an accuracy of more than 80% in performing the density classification task.

Space-time diagrams of one of the best of these CA update rules are shown in the figure below, each with a different (random) initial configuration. Interestingly, the overall behavior of this evolved CA (and also its performance on the density classification task) is similar to that of the hand-designed GKL rule. It also uses the checkerboard pattern to resolve conflicts between local high- and low-density regions.

These studies clearly showed that, indeed, emergent computation can be evolved in cellular automata. Furthermore, these studies also provided deeper insight into the actual evolutionary history of these high-performance CAs (i.e., how this capability evolved over time), and into how exactly this emergent computation is achieved in these decentralized systems with only local interactions. The technical details were presented in the various publications of the group. A gentler overview is provided in this historical review.

## Back to nature

Even though the results presented here are based on simple computer models, the lessons learned from these evolved cellular automata do actually provide useful insights into emergent computation in real biological systems. For example, a fascinating study compared stomatal dynamics on plant leaves (which supposedly optimizes CO_{2} uptake while minimizing water loss) with the dynamics observed in CAs performing the density classification task.

Based on both qualitative and quantitative comparisons, the researchers who performed this study conclude that they “have demonstrated that the dynamical properties of stomatal opening and closing on a leaf are essentially identical to those of some CAs that perform emergent, distributed computation”. In particular, they observe “patches” of all-open or all-closed stomata, with the boundaries between these patches seemingly moving along the leaf surface as the stomata (synchronously) open and close. This is indeed very similar to the local all-white and all-black regions and their moving and interacting boundaries in the CAs evolved to perform density classification.

Similarly, competing patterns have been observed in the global dynamics of aggregating *Dictyostelium* amoebae. In particular, there is competition between regular wavefronts and spiral waves, both of which emerge from local cell-cell chemical signaling, and depending on certain conditions (such as the density of amoebae) one or the other eventually wins out. Again, this is very similar to the dynamics of competing local all-white and all-black regions in the CAs evolved to perform the density classification task.

In short, it seems that the global dynamics as observed in cellular automata that were evolved to perform emergent computation can indeed be mapped quite closely onto similar patterns observed in real biological systems. Undoubtedly there are many other real-world examples, given the ubiquity of emergent computation in nature. However, natural systems are often cumbersome to study in great detail. Fortunately, simple computer models like cellular automata and genetic algorithms can help us understand more about the evolution of emergent computation.