The Evolution of Language and its Speakers
AUTHOR
Daniel Dor is head of the Dan Department of Communication, Tel Aviv University.

Reprinted with permission from The Instruction of Imagination: Language as a Social Communication Technology by Daniel Dor, published by Oxford University Press. © 2016 by Daniel Dor. All rights reserved.


The question of the emergence and evolution of language is recognized today as one of the central questions in the whole of science—certainly the most crucial issue we have to contend with if we wish to understand how the human species came to be what it is. In the last three decades, the question has inspired an unprecedented wave of research, in which scholars from a wide array of disciplines—linguistics and philosophy, the different branches of psychology, anthropology and sociology, paleontology and archeology, evolutionary biology and genetics, primatology and ethology, neuroscience and computer science—have been collaborating to find and interpret clues to what actually happened. The fact that we have no direct evidence—language does not leave material traces behind it—forces us to adopt a detective’s mindset, searching for pieces of circumstantial evidence that we then try to piece together into theoretically plausible hypotheses. Disciplinary boundaries lose their significance: every piece of evidence counts.

The question of the evolution of language, however, is not just important for its own sake. It should also be properly understood as the most crucial bottleneck that any theory of language should be able to squeeze through. With all the advances in the linguistic sciences, evolutionary biology is still light years ahead of us in maturity, sophistication, insight, and methodology: we have a much better understanding of the nature of evolution than we have of the nature of language. For every theoretical model of language we should thus ask: how is this evolvable?

In the final account, then, the deepest paradox of Chomsky’s program is the fact that it does not squeeze through the bottleneck: if language is genetically given to us as a universal cognitive capacity, we should have somehow evolved to get there. But if language as an innate cognitive capacity is universal and non-functional, infinitely generative and static, its emergence in the life of the human species cannot be explained in evolutionary terms. For this, of course, only evolutionary theory should be blamed. A language of this type would make sense in other worldviews—in the one, for example, that has the gift of language bestowed upon all human souls by a superior power. But the replacement of creationism with the evolutionary perspective carries certain implications that cannot be ignored: things in this world arrange themselves in complex patterns of variability; they develop and evolve because they are functional; they are always finite; and they are always dynamic. This is what evolution is all about.

Chomsky himself, in the first four decades after Syntactic Structures, consistently refused to discuss the question of evolution. It is a mystery, he used to say, not a scientific problem: trying to deal with it would be “as absurd as it would be to speculate about the ‘evolution’ of atoms from clouds of elementary particles” (Chomsky 1968, p. 61). Some of his colleagues—Pinker and Bloom (1990), Pinker (1994), Jackendoff and Pinker (2005), Jackendoff (1999), Piattelli-Palmarini (1989), and others—have claimed over the years that there is common ground to be found between the generative program and evolutionary theory, but Chomsky was actually right all along: the evolution of his language is indeed a mystery. In the last decade, however, Chomsky seems to have relented. In Hauser, Chomsky, and Fitch (2002), he offers a tentative conceptual solution to the problem. The solution, however, is so strange, so unconvincing, that the article actually seems to require a deeper interpretation—as an implicit statement of resignation. To begin with, the three authors insist that the “acrimonious debates in this field” have been launched by the failure to distinguish between “questions concerning language as a communicative system and questions concerning the computations underlying this system, such as those underlying recursion.” Questions about language as a communicative system, the authors state, are not questions about language as such, but about “the interface between abstract computation and both sensory-motor and conceptual-intentional interfaces.” They then make a distinction between two different types of human faculties for language. The first, FLB, the Faculty of Language in the Broad sense, includes “a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements.” All these are not problematic from the evolutionary point of view—they may all have precursors in other species—but they are not what makes human language what it is. The secret of language lies in the FLN, the Faculty of Language in the Narrow sense, and the evolution of this faculty presents the theory of evolution with its “deepest challenge.” What, then, does FLN include? It turns out that it only includes one property—recursion:

The internal architecture of FLN, so conceived, is a topic of much current research and debate. Without prejudging the issues, we will, for concreteness, adopt a particular conception of this architecture. We assume, putting aside the precise mechanisms, that a key component of FLN is a computational system (narrow syntax) that generates internal representations and maps them into the sensory-motor interface by the phonological system, and into the conceptual-intentional interface by the (formal) semantic system; adopting alternatives that have been proposed would not materially modify the ensuing discussion. All approaches agree that a core property of FLN is recursion, attributed to narrow syntax in the conception just outlined. FLN takes a finite set of elements and yields a potentially infinite array of discrete expressions. This capacity of FLN yields discrete infinity (a property that also characterizes the natural numbers). Each of these discrete expressions is then passed to the sensory-motor and conceptual-intentional systems, which process and elaborate this information in the use of language (p. 1571).
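
The “discrete infinity” invoked in the quote is easy to make concrete. Here is a minimal Python sketch; the toy grammar and the sample phrase are invented purely for illustration and come from nowhere in Hauser, Chomsky, and Fitch. A single recursive rule over a finite vocabulary yields an ever-growing set of distinct expressions, while any actual run is cut off at a finite depth, the analogue of the memory and processing limits mentioned below.

```python
# A toy recursive grammar over a finite vocabulary. The rule and the
# sample phrase are invented here purely for illustration:
#   S -> "the dog"  |  "(" S " that chased " S ")"

def expressions(depth):
    """Return the set of expressions derivable within `depth` recursions."""
    if depth == 0:
        return {"the dog"}
    smaller = expressions(depth - 1)
    return smaller | {f"({x} that chased {y})" for x in smaller for y in smaller}

# One finite rule, an unboundedly growing set of distinct expressions;
# any actual run stops at a finite depth, which is the resource limit
# that keeps the infinity merely potential rather than actual.
print([len(expressions(d)) for d in range(4)])   # [1, 2, 5, 26]
```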

This is it, then: the potential capacity for discrete infinity—which is never observed in reality, because certain properties of FLB, such as memory and processing limitations, prevent it from ever materializing—is the one thing that makes human language such a unique phenomenon in the biological world. And how did this capacity evolve? To begin with, the hypothesis that FLN only includes recursion “has the interesting effect of nullifying the argument from design, and thus rendering the status of FLN as an adaptation open to question. Proponents of the idea that FLN is an adaptation would thus need to supply additional data or arguments to support this viewpoint.” Until otherwise demonstrated, FLN is not adaptive. So, how did it evolve? Well, it “may have evolved for reasons other than language,” such as “navigation, number quantification, or social relationships”:

[W]e suggest that by considering the possibility that FLN evolved for reasons other than language, the comparative door has been opened in a new and (we think) exciting way. Comparative work has generally focused on animal communication or the capacity to acquire a human-created language. If, however, one entertains the hypothesis that recursion evolved to solve other computational problems such as navigation, number quantification, or social relationships, then it is possible that other animals have such abilities, but our research efforts have been targeted at an overly narrow search space. If we find evidence for recursion in animals, but in a noncommunicative domain, then we are more likely to pinpoint the mechanisms underlying this ability and the selective pressures that led to it. This discovery, in turn, would open the door to another suite of puzzles: Why did humans, but no other animal, take the power of recursion to create an open-ended and limitless system of communication? Why does our system of recursion operate over a broader range of elements or inputs (e.g., numbers, words) than other animals? One possibility, consistent with current thinking in the cognitive sciences, is that recursion in animals represents a modular system designed for a particular function (e.g., navigation) and impenetrable with respect to other systems. During evolution, the modular and highly domain-specific system of recursion may have become penetrable and domain-general. This opened the way for humans, perhaps uniquely, to apply the power of recursion to other problems. This change from domain-specific to domain-general may have been guided by particular selective pressures, unique to our evolutionary past, or as a consequence (by-product) of other kinds of neural reorganization. (p. 1578)

This is where the solution ends. The potential for discrete infinity may have evolved for other reasons, and it may have become penetrable, in humans, to cognitive domains other than the one within which it originally evolved—which means that it may be discovered in other animals, in non-communicative domains, and if it were indeed discovered in other animals, the door would be opened “to another suite of puzzles.” Now, in order to understand the unique human capacity for language, all we would have to do is figure out why the capacity remained domain-specific in the other species.

Well, this is really not a solution. Not even a tentative one. There is nothing here but a weary and desperate attempt to keep the essence of language (whatever is left of it) in the realm of mystery—away from the domain of evolutionary explanation. Of course, capacities may evolve for one function and then be adapted for others, and they may also be by-products of other “kinds of neural reorganizations,” but in such processes the capacities evolve and change to fit their new functional contexts: they do not simply stay the same. What is even more problematic is the capacity itself that is thus salvaged from explanation. After fifty years of research, all that is left is the original assumption of infinite generativity—the idea that everything we ever do and experience, which is finite by definition, is always an arbitrary obstacle on our way toward the fulfillment and understanding of our infinite linguistic potential. This is a philosophical assumption, actually a religious assumption, that goes against the very idea of science. In this sense, the series of articles by Hauser, Chomsky, and Fitch might be more favorably read as joint statements of resignation: we have tried to find common ground between linguistics and evolutionary science; as far as the periphery of language is concerned, we believe there is no real problem; at its core, however, language still seems to defy the mode of explanation that is at the core of evolutionary theory; maybe, only maybe, what we believe about the core of language might be reconciled with something at the periphery of evolutionary theory; but beyond that, we really have nothing to offer. The mystery is there to stay.

There is, of course, something deeply ironic in all this. If it were not for Chomsky’s insistence on the innateness of language, the very question of the evolution of language (in its modern form) would probably not have emerged in the first place—and it would definitely not have assumed the central position it now holds in the linguistic discourse. It may safely be said that virtually all the high-quality work in the field of language evolution—almost regardless of theoretical inclination—has been motivated by the attempt to salvage language from the Chomskian paradox of unevolvable innateness: to find a way to make language (innate or not) evolvable again. Most of the theories in the field show a serious commitment to the logic of evolutionary theory, a sophisticated approach to the question of evidence, a deep understanding of multiple causality, a basic suspicion toward the idea that actual pieces of linguistic knowledge are encoded in our genes, a much more serious understanding of learning, and a firm belief in the very idea that Chomsky rejected: that language evolved because it was functional. Most importantly, many of the theories are committed to a co-evolutionary view of the process, the idea that from a certain moment on, the processes of cognitive evolution and cultural evolution were entangled in a bidirectional spiral of influence (Deacon 1997, Tomasello 1999, 2008, Pinker 2003, Levinson and Jaisson 2006, Evans 2013, Evans and Levinson 2009, Hurford 2007, 2011, Richerson and Boyd 2005).

At their core, however, many of the theories are still formulated as attempts to answer the very question that Chomsky himself refused to deal with: what was it that happened to the human mind (or the human brain) that eventually allowed it to carry language? This is still a cognitive question, and the “mind” is still singular. A generally accepted research strategy is to look for linguistically relevant cognitive capacities—in humans and other animals—that may be shown to have been there before language, thus helping prepare the grounds for its emergence. Fitch (2010), for example, shows that we share components of the language capacity (in the FLB sense) with chimpanzees and other primates, whales and seals, birds and bees, and Hurford (2007, 2011) claims that the minds of the apes, our closest relatives, are in many different ways already language-ready. These explanations characteristically focus on those aspects of language-related cognition that are also involved in other human activities, and thus highlight the indisputable fact that the use of language requires many capacities that are cognitively general and multi-functional—and thus potentially shared with other species. Language did not appear in a cognitive vacuum, and in many ways it still reflects properties of pre-linguistic humans, some of which may be shared with the apes, with other mammals, with other social animals, or with other animals who have a nervous system.

There is much to be learned from this perspective, but it also suffers from three interrelated problems. First, not all the cognitive challenges presented by language are cognitively general and multi-functional. The most important challenges are actually specifically unique (Jackendoff and Pinker 2005). The specific capacities that we have for lexical memory, for example, cannot be reduced to our general capacities for memory (long or short term), and the general auditory capacities that we share cannot explain the unique ways in which we compute linguistic sound. We are also anatomically and physiologically adapted to fast speech (Lieberman 1991, 2007). How could such capacities, uniquely dedicated to language, emerge before language itself? Second, the theories eventually have very little to say about the actual evolutionary dynamics that brought language about. They only attempt to show how the evolution of language became possible, not how it actually happened. Third, while Chomsky’s view fails to squeeze in through the bottleneck of evolution, the theories pass it too easily—so easily, in fact, that they raise the opposite question: if so much is shared with the other species, why haven’t they developed language? Why does Kanzi, for example, with all his enormous achievements, only manage to get to the brink of language (Savage-Rumbaugh and Lewin 1994)? As Bickerton (2008, pp. 288–289) puts it in his review of Hurford (2007), the belief that “the cognitive and communicative capacities of great apes brought them to the brink of language” implies, among other things, that “minor improvements in ape cognition and communication gradually accumulated until some progenitor of humans became ‘language-ready,’ so that the actual transition to language was no big deal.” But if this is the case, why did the improvements only accumulate in one species?

(W)hy not in all, or at least one or two others? Why are these improvements not continuing in modern apes, so that we can observe them in action? Why is it that while we have thousands of complex languages with convoluted structures and tens or hundreds of thousands of words in each, they have communication systems resembling those of birds and fish? Why, while we are making moon-landings and sonatas, are they still fishing for termites and cracking palm-nuts? (p. 289)

The answer I would like to propose follows a major line of thinking in the general field of human evolution (cf. Suddendorf 2013) that has recently been gaining ground in language evolution too: the real story is social-technological, not cognitive. What separates us from the apes is a sequence of social and technological revolutions—one major change after the other in the life experiences of human communities. The emergence of language was one such revolution, not the last and definitely not the first. It was preceded by an entire history of revolutions, all of which brought human communities to the point where they could invent language—as yet another feat of collective genius. As language gradually established itself in human communities, individuals began to be selected for the capacity to meet the growing challenges of the new technology. It was the collective invention that eventually shaped the cognitions of its users, not the other way around.

10.1 Re-Formulating the Questions of Evolution

The new perspective is founded on a major theoretical re-arrangement of the two issues we are interested in. The question of the evolution of language is no longer a cognitive question: it has to do with the evolutionary history of the technology—its invention, development, propagation, and diversification, the social contexts within which it emerged in ancient human communities, the ways it changed society once it was established, and so on. It is a question about the social-technological development of humanity. The question of the evolution of human minds (in the plural) and their relations with the emergent technology is thus secondary: it has to do with the involvement of individual human minds in a technologically-driven process.

Unfortunately, most of what we would like to know about the social-technological evolution of language is probably buried forever in the past. Where was language invented? When exactly did that happen? How many times, and in how many communities, was language invented, then forgotten, then reinvented, before it stabilized in some communities as a regular element of social life? How long did it take before language reached the moment of universal spread? How long ago did we still have communities of non-speaking humans? We may, however, make a number of assumptions, all of which are based on what we know about the evolution of other technologies (Arthur 2007). We know that first prototypes, the first versions of a new technology that actually work, do not look anywhere close to the final version of the technology. The first prototypes of language were much less complex, much less sophisticated, much less efficient, than the languages of the world as we know them today. We know that necessity, not just capacity, is the mother of all invention: the absolute need to solve a problem inspires an exploratory process that eventually stumbles upon a good-enough solution, which is often identified as such only in retrospect (and because of that, inventions always require a considerable amount of luck). We know that the understanding of an invention requires an understanding of its social-technological context. Innovations never appear out of nowhere. They always have a past—older technologies that came to be modified and combined in new ways to solve new problems; old obstacles that had to be removed; or old problems that came to be solved by new means. The context, moreover, heavily determines the future of innovations: many pre-conditions have to be met for innovations to eventually be accepted, stabilized, and propagated. And we know that innovations further evolve together with their contexts: to the extent that an invention proves to be as efficient as it promised to be, it actually begins to change its own environment; the modifications to the environment call for new functions, which effect changes in the technology itself; and so the technology and everything around it come to be entangled in co-evolutionary spirals.

The first prototype of an invention, any invention, makes a difference because it makes certain things possible that could not be done before—at a very rudimentary level of success. As the first users begin to actually work with the prototype, as they start to accumulate their experiences as users, they begin to learn things about the interactions between the invention and the environment that could not have been known before. They gradually understand more about the capacities of the technology and the ways it should be used, and they discover some of the problems (totally new problems) that arise as the technology interacts with its environment. For a long time, and very often unintentionally, different users thus introduce many slight modifications to the prototype (and to the way it is used), which in their turn have a cumulative and gradual effect on the general efficiency of the system. The effect is quantitative: the system remains the same, it just gets better. In some cases, this long line of accumulated improvements eventually leads to the stabilization of the invention in its final form. In other cases, the quantitative process eventually translates into a qualitative effect—a technological revolution. The revolution may occur as a direct result of the gradual accumulation of modifications. The system reaches a critical point at which totally new patterns of usage are all of a sudden made possible. It may also occur because of the accumulation of problems. New problems arise as the system gradually improves, and a new set of modifications, which emerges as an attempt to solve these problems, turns out to open up entirely new functional capacities. Sometimes, the revolution is made possible only when the environment (physical, social, and technological) changes in a specific way. One way or the other, the system that eventually emerges from all this should be properly thought of as the first prototype of the next generation of the technology. It is qualitatively different from the first generation in its architecture, and it can do things that were outside the functional envelope of the first generation—again, at a very rudimentary level of success. Consequently, the system now enters a new phase of gradual evolution, in which new capacities and new problems arise and many slight modifications are introduced. Then another revolution occurs, the next generation of the technology appears, and so on and so forth. The process goes on as long as it remains both necessary and possible. All the complex technologies that we use today are the products of such evolutionary histories. They are all related to some original invention through a long and complex line of revolutions and gradual modifications. There is no reason why language should be an exception. Finally, we know that technological systems, as they evolve, impose more and more system constraints on the possible avenues of their own future evolution. The system acquires a certain specificity that gradually makes certain types of changes more difficult to incorporate, and thus in effect participates in directing its own evolution.

It should now be quite clear why the question of the evolution of language as a social technology should be dealt with before the question of the evolution of speakers can even be considered: throughout the entire process, at different stages of the evolution of language, different human individuals, who occupied different positions vis-à-vis the evolving technology and vis-à-vis their groups, found themselves facing different cognitive challenges. Between the linguistic leaders—the inventors and developers of the invention—and all those who could not yet handle the technology, there were learners and imitators (with different capacities), co-operators (in different social positions), speakers and listeners (more or less competent), passive listeners and eavesdroppers, and probably, from a certain point on, multi-linguals, liars, and translators. Each had to be able to do different things. We need to understand these different challenges, and then ask: How did these individuals manage to cope? How did they recruit the required capacities?

Once we frame the question this way, two points become obvious. First, the cognitive challenges of language, throughout its evolution, grew and developed along with it. The use of the first prototype in its original form required certain sets of capacities; the more advanced versions of the prototype gradually required more of these capacities; the first revolutionary advance in the system brought with it new challenges that required new capacities, which were then gradually stretched as the new generation of the technology entered the phase of gradual growth, and so on and so forth. The cognitive challenges of the first speakers of language were not our challenges. The challenges themselves evolved. Second, not everybody who should have met the challenges of language, at any of the stages, actually did. Many along the way probably failed, and many others only met some of the challenges, at different levels of success. This is what happens with every technology. From the very beginning, then, the existence of language created new patterns of variability between individuals—in their roles as language users—patterns of variability that asserted themselves as such for the first time. Some of the variability probably emerged from the social positioning of the different individuals vis-à-vis language: the level of their exposure to the system, their status and rank, their social ability to actually use language to their advantage. Some of the variability, however, must have been causally connected to variability in cognitive capacity—some of which must have been related to parallel patterns of genetic variability. Other things being equal (and they never really are), we may assume that those who found the technology easier to learn, handle, and improve—those whose cognitions and genetics were more suited for the task—earned more dividends from the system (both individually and collectively), and were gradually selected over those whose cognitions were less compatible with the technology.

The question, however, still remains: where did those individuals who did manage to meet the challenges of language, at the different stages of its evolution, find the capacities to do it? The traditional, cognitively oriented answer would be that the relevant capacities must have already been there before: behavior is made possible by pre-existing capacity, and because of that, the explanation for the capacity should be sought somewhere else, away from language and its challenges—in the structures of the brain (or the mind) and the structures of our genes. Recent advances in evolutionary theory, however, allow for a very different answer: the capacities evolved together with language—for language. First we invented language collectively, then language changed us individually. To see how this is possible, we have to change something fundamental in our perception of the general relationship between behavior, capacity, genetics—and innovation.

10.2 Innovation, Behavior, Capacity, and Genes

In the last decade or so, many of the most sophisticated new theories in evolutionary biology, especially in the domain of Evolutionary-Developmental Theory (evo-devo), have been informed by the understanding that behavioral innovation plays a much more important role in the evolution of biological species than has previously been assumed (West-Eberhard 2003, Jablonka and Lamb 2005). Let us consider the simplest scenario of evolution: a number of individual members of a biological species, reasonably adapted to the conditions of their environment, who all of a sudden find themselves losing ground because the environment begins to change—bringing with it new problems to which the individuals have not yet had the opportunity to adapt. In the traditional, gene-centered view of evolution, these individuals have no choice but to go on behaving as if the environment has not changed (their arsenal of possible behaviors has already been genetically fixed), wait (so to speak) for a helpful genetic mutation, and let natural selection determine their fate. This, however, is not what actually happens. Biological organisms react to environmental changes and launch processes of exploration in which they try all kinds of behaviors they have not been genetically adapted to before. Terrestrial mammals are not adapted to swimming, but when they find themselves surrounded by water they nevertheless do what they can to keep afloat. When the environment gets colder, animals look for shelter in places they have never entered (or even noticed) before. The chaotic nature of this process is most clearly revealed when we think about it in the most down-to-earth terms: the new environmental conditions raise the level of the animal’s stress, and stress brings about behaviors that are disorderly, exploratory, accidental, and sometimes even frantic (depending on the severity of the threat). Most behaviors do not help much, but from time to time an animal stumbles upon a behavior that actually lets it survive, at least for a while. In these cases, the animals survive not because they were already genetically prepared for the new circumstances, but the other way around: they survive because they were capable of behaving outside the confines of their genetically selected-for behavioral envelope.
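
To make the contrast between the two pictures concrete, here is a deliberately crude simulation sketch; all the numbers and the survival rule are invented for illustration and carry no biological claim. Rigid agents, locked to their genetically fixed repertoire, never hit the new solution; plastic, exploratory agents sometimes stumble upon it.

```python
import random

random.seed(1)

REPERTOIRE = range(10)   # behaviors the species is already adapted for
NEW_SOLUTION = 42        # the only behavior that works after the change

def survives(plastic, tries=50):
    """Does one stressed agent hit the new solution before time runs out?"""
    for _ in range(tries):
        if plastic:
            behavior = random.randrange(100)      # exploratory, near-chaotic search
        else:
            behavior = random.choice(REPERTOIRE)  # locked to the old envelope
        if behavior == NEW_SOLUTION:
            return True
    return False

population = 1000
print(sum(survives(False) for _ in range(population)),  # rigid agents: none survive
      sum(survives(True) for _ in range(population)))   # plastic agents: typically a few hundred do
```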

Biological organisms, then, are adapted in different ways to their environments, but over and above these adaptations they are also adapted—to different degrees and in different ways—to the foundational fact that their environments keep changing. They are capable of innovation. In the biological literature, this capacity is referred to as the capacity for plasticity. There is, of course, a genetic foundation for the capacity for plasticity, and different species are capable of different types and levels of innovation. This fact will come to play a major role in our analysis. It is also crucial to understand, however, that this genetic foundation has very little to do with the actual behavioral products of the exploration processes. Innovating organisms are genetically prepared for the search for behaviors that break the specific, genetically selected-for mold of the regular patterns of their lives. The innovations themselves emerge from the search process itself.

Stumbling by luck upon a successful behavior, however, is only the beginning of a much longer process—in which the new behavior has to be stabilized as part of the behavioral arsenal of the organisms. The organisms have to identify the successful behavior, isolate it from the other accidental behaviors that were not helpful, understand something (at the relevant cognitive level) about the causal connection between the behavior and its functional output, learn how and when to initiate it systematically—and, eventually, get used to it. Different species, again, differ from each other in their ability to stabilize a new behavior. Assuming, then, for the sake of simplicity, that in our scenario the environment changes and then stabilizes again with a new set of conditions, those individual members of the species that would survive the ordeal, regardless of their genetic makeup, would survive on the basis of what they actually managed to achieve with their innovations—everything that they managed to learn and apply in their relationships with the new environment.

The next step in the argument takes things to an entirely different level of complexity. Assume that some of our innovative individuals manage to survive in the new environment—and eventually multiply. Their offspring will now be born into a world in which the new stabilized behavior is simply there. They would not have to re-invent. They would have to learn. None of the offspring would be already genetically adapted to the task: the capacity for the innovative behavior of their parents was not passed on to them in their genes. The fact that the behavior is now already in their world, however, would radically change the way their genes would express themselves in the process of their ontogenetic development. The young organisms would have no choice, in the course of their development, but to launch an exploratory process of their own, recruit as many of their genetically given capacities as possible (capacities that evolved for other purposes), combine them in innovative ways—and attempt to do whatever they can, with the tools they have, in order to master the behavior. In the course of this effort, then, a totally new pattern of cognitive and genetic variability among the learners would be exposed—variability in the types and the qualities of the genetically given capacities that the different learners can recruit, and combine, for the new learning task.

This variability, in its turn, would assert itself in two complementary ways. First, different learners would eventually adopt (and stabilize) different strategies for the learning task. This is so because the different learners would rely on different capacities, of different qualities, and would thus have no choice but to attack the learning problem from different angles. Second, different learners would eventually master the behavior to different degrees. Some would find the learning task relatively easy, many would find it challenging but possible, others would find it very difficult, even impossible. To the extent that the behavior remains obligatory for survival, these differences would reflect themselves in the ability of the learners to multiply—which means that they would also be reflected in the patterns of genetic distribution in the next generation. Specific combinations of genes, which only came to be functionally related to each other because they were recruited for the new task, would now be selected for—and the next generation would actually be more genetically adapted to the behavior (only more adapted, never totally). In the biological literature, this process is called genetic accommodation: genes accommodating themselves to innovations (West-Eberhard 2003), capacities accommodating themselves to behaviors. Not the other way around.
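
Genetic accommodation lends itself to a minimal quantitative sketch, under openly invented assumptions: a single heritable “aptitude” number stands in for the recruited combinations of genes, and mastery of the already-present behavior gates reproduction. Selection on learning success then shifts the distribution of aptitude, so later generations are more (though never totally) adapted to a behavior their ancestors invented.

```python
import random

random.seed(2)

def next_generation(population):
    """One round of selection on learning success. Each number is an
    individual's heritable aptitude for recruiting old capacities for
    the new, already-present behavior; mastery gates reproduction."""
    survivors = [a for a in population if random.random() < min(1.0, max(0.0, a))]
    # offspring inherit parental aptitude plus unbiased mutational noise
    return [random.choice(survivors) + random.gauss(0, 0.05)
            for _ in range(len(population))]

population = [random.gauss(0.3, 0.15) for _ in range(2000)]
for _ in range(10):
    population = next_generation(population)

# Mean aptitude climbs well above the founding 0.3: the gene pool has
# accommodated itself to a behavior that was invented first.
print(round(sum(population) / len(population), 2))
```

The sketch deliberately leaves out everything the text insists on elsewhere: variability in learning strategy, social positioning, and the fact that the behavior itself keeps changing.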

In the simplest scenario, then, the process that leads from the change in the external conditions of life to the change in the distribution of genes across the population involves necessity, exploration, luck, innovation, stabilization, and then learning, exploration again, recruitment and re-combination, exposure of cognitive and genetic variability, strategy stabilization, and then, eventually, genetic accommodation. It is a very different process indeed from the one envisioned in the traditional, gene-centered conception of evolution. Note that none of this implies that the more traditional mechanisms of evolutionary change are no longer there. Mutations, and other molecular changes such as genetic reshuffling, still occur and remain crucial. The point is that the innovations have a direct influence on the way the products of these genetic changes end up expressing themselves. They change the general pool of genetic variability, and are selected for, or against, on the basis of their contribution to the effort of invention and stabilization.

Finally, as Jablonka and Lamb (2005) show, once a process of this type is launched, and as long as certain conditions are met, the emergence of new capacities may lead to the further development and refinement of the innovation itself. As noted, the capacities for plasticity manifested by different species (and different members of different species) vary—but they all share a common property: they are all finite. The capacity for invention, and for the learning process that follows, is never completely open-ended. The further an innovation is from the envelope of the already-adapted behaviors of the individual, the more difficult it would be for the individual to invent and stabilize it. Because of that, individuals of later generations, who have by now adapted themselves, at least partially, to the behavior invented by their ancestors, would now be able—if required by necessity—to invent and stabilize behaviors that were outside the capacity of their ancestors. Such additional innovations, to the extent that they prove useful, would launch another process of learning, exposure of genetic variability and eventually genetic accommodation, and then more innovation—as long as necessity is there. Jablonka and Lamb refer to all this as the assimilate-stretch dynamic: innovative behaviors become easy to accomplish because of genetic accommodation, so individuals can stretch their behavioral envelopes by further innovations, then assimilation occurs again, and so on and so forth. The evolutionary paths of the innovation and its users find themselves entangled in co-evolutionary spirals.
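
The assimilate-stretch dynamic can be sketched as a loop on top of the same toy picture; the thresholds are, again, purely illustrative. Each assimilated innovation raises the baseline from which the next, previously unreachable innovation can be attempted.

```python
# Assimilate-stretch as a loop (illustrative thresholds only): a behavior
# can be invented only if it lies within `reach` of the currently
# accommodated capacity; once assimilated, the baseline moves up.

capacity, reach = 1.0, 0.5          # arbitrary starting values
innovations = [1.3, 1.7, 2.1, 2.6]  # each innovation harder than the last

for difficulty in innovations:
    if difficulty - capacity > reach:
        print(f"difficulty {difficulty}: out of reach, the spiral stalls")
        break
    capacity = difficulty           # genetic accommodation: assimilate, then stretch
    print(f"difficulty {difficulty}: invented; capacity assimilated to {capacity}")

# Note that the last innovation (2.6) was unreachable from the founding
# baseline (2.6 - 1.0 > 0.5); only the spiral brought it within reach.
```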

It is crucial to see, then, that in the course of this complex process new capacities emerge that are not just re-combinations of capacities that were already there before. Behavioral innovation produces cognitive novelty. New behavioral patterns are forced into existence by necessity; they are gradually carved by experience to approximate their specific functional goals; they become objects of learning, and eventually mold capacities in their shape. It is thus not the case that behavior is based on already existing capacity: capacity actually emerges from behavior. We are never already capable before we begin. We gradually become capable as we try. Skill emerges from practice, not the other way around. Quite obviously, new capacities are never totally unrelated to their past: pre-adaptations play an important role in the story. New capacities, however, emerge from the interaction—made possible by plasticity—between old capacities and new necessities, and because of that, they are never just reflections of their past. They really are new.

Based on this general perspective, then, Eva Jablonka and I have developed a principled model of the co-evolutionary dynamic of language and its speakers (Dor and Jablonka 2000, 2001, 2010, 2014). As we show, pre-linguistic humans must have already been ready (socially and cognitively) for the beginning of the process of exploration that eventually gave birth to the first prototype (or prototypes) of language—but they did not have to be cognitively language-ready before language came into being. Language was, in all probability, invented before its speakers were fully prepared for it. It was born out of necessity, between human individuals, on the basis of plasticity, and its technological evolution was the driving engine of the entire process. The cyclical collective dynamics of invention, negotiation, propagation, habit formation, and conventionalization (and then more inventions, and so on and so forth) remained ahead of the individuals who were involved in it. Speakers were struggling to keep up with language, and whenever they managed to adapt to it, it was already somewhere else, further down the road of evolutionary development. In the course of the process, language itself gradually developed into a highly specialized system, with unique technological properties. Consequently, acquiring sufficient skill with the system came to require unique capacities, which means that human individuals as we know them today do indeed have a cognition that is partially biased toward the acquisition and use of language. We do have innate capacities for language, but these capacities are derivative, emergent, variable, and partial—not constitutive, foundational, universal, and complete. Innateness is a posteriori, not a priori.

And it is not only cognitive—it is also emotional (Jablonka, Ginsburg, and Dor 2012). The deepest indication that we are by nature destined to participate in the social activity of language is not the fact that we can do it—but the fact that we need it, that we crave it. Throughout the evolution of language, individuals were not just selected for the capacity to participate in the activity of language—but also for their will to do it. Those who were more deeply attracted to the evolving technology, who were more desperate to understand and to talk, who longed for mutual-identification—all these simply spent more time around language; instead of doing other things, they spent more time mutually identifying from an earlier age, and invested more energy in the acquisition of their linguistic skills. Their fascination with language thus increased their chances of survival, and was thus partially genetically assimilated. We are already born with (different levels of) this fascination. Our minds are language-craving. This is why, as children, we actively look for language. Not because we already know it (or parts of it), but because we want it. Obviously, we also crave experiential communication, and in this sense, we are not very different from many other species. Other animals, just like us, can feel lonely without social contact. Only humans, however, are born with the hunger for the type of social contact that can only be achieved by mutual-identification.

10.3 The Pre-History of Language

It is on the basis of this conception of the evolutionary dynamics that we may now try to delve deeper into the actual process. Our first task is to figure out as much as we can about the social, technological, and communicative lives of pre-linguistic humans, to find clues to the context that made the invention of language both possible and necessary. In 2011, Chris Knight, Jerome Lewis, and I invited twenty-four scholars, from a wide array of disciplines, to discuss this question in an intensive workshop in London. The volume based on the workshop, Dor, Knight, and Lewis (2014), is the first collective attempt in the literature to systematically construct a synthetic picture of pre-linguistic social and technological life, rich and detailed enough to shed light on the social origins of language. We, as modern humans, are so used to language that we find it difficult to imagine social life without it, but the emerging picture of our pre-linguistic ancestors reveals a level of social, technological, and communicative complexity and sophistication much closer to our own than to ape societies. There is a sobering lesson here: being human is not all about language.

For a million and a half years or more, before the emergence of Homo sapiens, archaic human communities had already been constantly moving away from the ape-like societies of their ancestors toward human social life as we know it. At least two hominin species were involved—Homo erectus and Homo heidelbergensis—and although it is hard to figure out exactly which species contributed what to the process, there was a clear common thread: throughout the process, human survival gradually came to depend less on individual behavior and more on collective co-operation. Individuals gradually came to depend more on others. This dependency required more and more communication—still at the level of experiential presentation, still within the here-and-now of the communication event—and it required individuals to become more and more socially sensitive. Higher levels of communication and sensitivity allowed for further social and technological developments, and so on. Pre-linguistic human societies were already deeply entangled in a unique spiral of social, technological, communicative, cognitive, and emotional co-evolution—and it was from this spiral that language eventually emerged. Different scholars in the field of human evolution highlight different facets of this process, sometimes presenting them as singular explanations for the entire drama, but it is important to remember that all facets were also connected to each other in what Sterelny (2012) calls feedback loops—reinforcing, directing, and shaping each other. Hominin societies were changing in all possible ways, none of which can explain the process in isolation from the others.

Wrangham (2009) highlights the control of fire and the invention of cooking. As he shows, cooking dramatically increased the amount of energy ancient humans obtained from their food, and consequently changed human life at all the relevant levels. We adapted to cooking anatomically and physiologically—our digestive systems got smaller, our jaws weaker—and the reduced costs of feeding left much more energy for brain growth. Our ape relatives spend around six hours a day chewing raw food; our cooking ancestors had much more free time. They ate around the fire, which turned into the central site of social and cultural life. Most importantly for our present purposes, cooking created a new division of labor between the sexes: men hunted; women gathered and cooked. Released from the need to spend most of the day eating, the men could now concentrate on bringing more meat back to the camp. As a result, men and women came to depend on each other for their subsistence. Based on a wide array of evidence, Wrangham identifies the invention of cooking with the rise of Homo erectus, about 1.8 million years ago.

Hrdy (2009) tells another story that probably unfolded in the days of Homo erectus. Erectus babies were already taking much more time to mature than their ape peers, and they required constant feeding and protection for a long time after weaning. From a certain point on, “human mothers began to bear offspring too costly to rear by themselves” (p. 283). What evolved was a social arrangement unique among the primates—alloparenting. Babies began to be taken care of collectively, not just by the mother, but also by fathers, grandmothers, and other family members. This created a complex web of practical and emotional dependencies, between mother and alloparents, between alloparents and children. As Hrdy shows, the emergence of co-operative breeding must have been a major driving engine behind the evolution of human cognitions and emotions as we know them today—the unparalleled will and capacity to figure out what others are thinking and feeling, empathy, the uniquely human feelings of shame and guilt (cf. Jablonka, Ginsburg, and Dor 2012).

Sterelny (2012) and Tomasello et al. (2012) concentrate on yet another revolutionary development: the emergence of collaborative foraging, especially the collaborative hunting of big game—a complex and risky endeavor that requires high levels of group planning and co-operation on site, but also promises more dividends for the individual hunter than he could expect to gain from hunting alone. Collaborative hunting is clearly evidenced in Homo heidelbergensis, around four hundred thousand years ago, but it may also have earlier beginnings. It created new types of dependencies, and it probably contributed much to the emergence of the uniquely human sense of group identity that is based on the sharing of food. Most importantly, it required much higher levels of skill, and gradually came to depend on advances in tool manufacture, from stone knives to projectile weapons. As Sterelny (2012) convincingly shows, all this must have gradually created a totally new type of dependency: the survival of the group came to depend on the ability of experts to pass their knowledge to the young. What emerged was a regime of apprenticeship, in which adult experts began to actively intervene in the learning processes of the young and organize their own activities in ways conducive to learning. This was the beginning of pedagogy, a uniquely human social activity.

It is easy to see how these dynamics and others, once launched, would spiral together very quickly and begin to reinforce and shape each other. It is also clear that they all require more and more communication, more and more information sharing. What this suggests is that ancient human societies, for a very long time before language, must have already been raising their levels of communication in revolutionary ways. The best discussion of this side of the drama is still Donald (1991). As he shows, Homo erectus societies must have already turned from episodic to mimetic. Ape societies are episodic: communication between individuals, as sophisticated as it is, is still unreflective, concrete, and situation-bound. Erectus societies added an entirely new dimension to the episodic modes of communication they inherited from their ancestors: mimetic communication combines mimicry, imitation, gesture, tone of voice, facial expression, bodily movement, and eye contact to produce intentional and reflective communicative acts. It includes, quite simply, everything we use in a game of charades. It was exactly this new mode of communication that allowed human societies to meet the growing challenges of dependency. Donald demonstrates this with the all-important development of pedagogy. The manufacture of erectus tools of the more advanced Acheulian type requires months of training. The skill cannot be learned just by imitation, and mimesis is still, today, the most efficient way to teach it: going through the motions more slowly, intentionally freezing at different points in the process, pointing at this or that facet of the work, expressing frustration and satisfaction, and so on and so forth. As we saw in chapter 2, this is also the case with all other manual skills that we learn and teach. Donald also highlights the significance of mimesis in the communication of emotions, private and social—in such revolutionary forms of expression as pantomime, mimicry, music (mainly singing), dance, and ritual. All these, of course, are still extremely important in our lives as modern humans. Donald’s argument strongly suggests that they are much older than language. Much of the work reported in Dor, Knight, and Lewis (2014) shows that, in all probability, this is indeed the case. Language was born into a human social world already suffused with polymodalic communication (Kendon 2014, Lewis 2014).

The new means of communication allowed for the transfer of knowledge and identity between generations, and thus for the emergence of Tomasello’s (1999) ratchet effect—the stable accumulation of innovations. They participated in the achievement of the levels of trust required for collective work (Knight 2014), in the establishment of normativity (Zlatev 2014, Lamm 2014), in the emergence of new forms of play-and-display (Whitehead 2014), including the display of the self, and in further, ritual-based developments in the relations between the sexes (Power 2014, Watts 2014). Most importantly, they allowed for a revolutionary rise in the very ability of human societies for collaborative innovation (Dor and Jablonka 2014). Research shows that apes innovate too (McGrew 1992; Whiten et al. 1999; Yamamoto et al. 2008), but they very rarely do it together. The process of collaborative innovation, in which different individuals, with their different experiential perspectives on the problem, work together to find a solution that none of them could have found alone—this is a uniquely human capacity (Glăveanu 2011).

All this makes for an extremely complex and dynamic view of pre-linguistic societies, but I would like to claim that at a more abstract level, everything that was happening revolved around one thing: the collective effort of experiential mutual-identification. Our pre-linguistic ancestors managed to achieve what they did because they spent enormous amounts of collective effort in the struggle for mutual understanding, mapping the differences and similarities between their experiential worldviews, learning from each other and teaching each other. They gradually spent more and more of their time doing things together, solving problems together, sharing and comparing experiences. This was the first revolution that made us who we are. The apes have a theory of mind: they understand that the other may have a different picture of the world, and they are capable of reading the other’s intentions, following the other’s gaze, and so on. This, however, is still an individualistic capacity. The human breakthrough was the upgrade of this capacity into a collective, mutualistic, dialogical capacity—and its establishment as the single most important determinant of human life.

As far as the invention of language is concerned, the implications of this perspective are much more technical: it explains why, at a certain point in time, the invention became both necessary and possible. The invention became possible because experiential mutual-identification is the machinery required for the construction of language. Humans had already been systematically and efficiently mutually identifying their experiences. What was required was a radical change in the use of something that was already there. The invention became necessary because the growing dependency on experiential mutual-identification locked humanity in a vicious circle—an extremely beneficial circle but a vicious one nevertheless: the ever-growing dependency of the community members on mutual-identification required a constant rise in the amount and quality of the information that could be shared and compared among the group; the rise in information sharing, however, only contributed to the deepening of the dependency. We are still trapped in this circle: the collective understandings that we manage to establish are always a few steps behind the collective problems we have to solve. This vicious circle, I would like to suggest, eventually forced pre-linguistic humans—Homo erectus or Homo heidelbergensis—to begin their explorations into a new realm of communication. They had already expanded the functional envelope of experiential communication to the maximum, but they needed more. Everything that could be shown was already shown, but this was no longer enough. They had to find a way to do what no other species could even dream about: communicate what could not be communicated experientially. The goal, then, was determined by necessity; the capacities and machinery were already there. The only thing still missing was the functional strategy.

10.4 Crossing the Rubicon: From Experience to Truth

With all their revolutions, pre-linguistic humans were still living in a social world defined by the here and now. Communicators could systematically negotiate their experiences only if both or all of them managed to experience them together. Handling situations in which the thing to experience was outside the experiencing range of the interlocutors remained beyond the functional limits of the entire system. With bodily and vocal mimesis, they helped each other experience—you see, you point, I see what I hadn’t noticed myself, we look each other in the eye and acknowledge—but this they still did as perfectly experiential animals. Like all other species, they only knew how to follow their own senses. So, in the simplest scenario, if individual A pointed at x (a prey, a predator, other people, fire) and accompanied the pointing with some mimetic sound associated with x, and if individual B saw A pointing, looked in the direction pointed at and identified x, then all went well: mutual-identification has been achieved. But if x was positioned outside B’s field of vision, the act failed. If anything, it widened the experiential gap between them.

Turning such failures into success was exactly the pressing challenge. It must have emerged slowly but consistently, in more and more severe instances of epistemic dependency (Dor 2014), situations where (i) A experienced something that called for action, but he or she could not act alone on the basis of the experience; (ii) another individual, B, was in a position to act but had not experienced the call for action; and (iii) the survival of both depended on A’s capacity to get B to do what was needed. The challenge of epistemic dependency required a radical change of attitude: the failure would turn into success if B managed to interpret A’s communicative act not as an invitation to experience—but as an invitation to imagine. B would have to understand (without words): “A is intentionally attempting to turn my attention to something by pointing. His or her vocalization indicates that it is of the type x. As for myself, I cannot see anything there. I will, however, choose to go against my own experiential judgment, believe A’s experiential judgment, imagine there is something there of the type x, and act upon my imagination.” For me, this was the essence of the linguistic revolution: the emergence of the will and capacity to imagine what you cannot see with your own eyes, simply because you believe somebody else.

This, I would like to suggest, is what the inventors of language began to experiment with: not the construction of a new system, but the use of the old tools of experiential-mimetic communication for a new type of communicative function—based on experiential trust. They were trying to use everything they already had in order to refer to unshowable experiences as unseen instances of what was already familiar: “We all know x, and there’s one of them, there, where you cannot see.” Whenever they managed to pull this off, they actually turned their mimetic signals into proto-linguistic signs, still holistic and analogue, but already performing the task of instruction. This is why, in the beginning, there was probably nothing perceptibly different about the explorations: they looked and sounded like regular events of experiential-mimetic communication. The new function, however, must have asserted itself quite quickly as a revolution. The epistemic reach of the inventors of language began to expand beyond what they experienced themselves—alone or together. More and more elements of the world began to penetrate their worlds from the outside: things they had not experienced by themselves, but had been told about. For the first time in the evolution of life, humans began to experience for others, and let others experience for them. The consequences were enormous, both in terms of the growing set of practical challenges facing the innovators’ communities, and in terms of everything social. The Rubicon of experience was crossed.

This, however, did not come without a price. As long as communication is experiential, the interlocutors always maintain the capacity to verify the communicated meaning at the time of the communication event, to see with their own eyes. When immediate verification is always possible, the principled problem of truth does not arise. It was exactly this sense of experiential confidence that the inventors of language, in their explorations into the instruction of imagination, had to sacrifice: the new function was based on the replacement of knowledge with belief. It is only when we begin to count on what other people tell us that we seriously begin to wonder: is this right? Are they telling us the truth? Could they be mistaken? Is this a lie? The formal semantic understanding of truth as the relationship between linguistic propositions and the world thus captures something very deep (as long as we remember to replace the world with the world of experience): the problem of truth as we know it was born together with language. It changed humanity forever. Among other things, it created new types of social emotions (and new ways to manipulate them): truth-related anxieties such as doubt and suspicion (Jablonka, Ginsburg, and Dor 2012). Hundreds of thousands of years later, modern humans began to come back to this foundational relationship between language and truth, knowledge and belief, anxiety and doubt—this time as reflective thinkers trying to understand themselves. This was the birth of philosophy.

10.5 Toward a History of Language

If all this is on the right track, it allows for a very specific hypothesis concerning the further evolutionary development of language and its speakers, from the stabilization of the first exploratory beginnings to the stabilization of the technology of language as we know it today. The entire process was pushed forward by the constant need to raise the levels of success in instances of instruction. The function gradually shaped the technology, not the other way around: it forced the emergence and further improvement of components and properties of the technology; it created new problems that had to be solved; it directed the path of development from beginning to end. Language as a specialized, autonomous technology, with its fundamental characteristics, was the final result of this process—but much of it was already in the cards at the moment of origin. In the following, then, I would like to tell the story the way I see it. It is speculative, of course, as it must be, but I believe it captures something essential about the causal chain that took us from there to here.

Let us, then, get back to the inventors of language and their heirs, who had already stabilized the new function and were now seeing their worlds begin to expand. What did they have to do in order to increase the efficiency of their new technology? Two major challenges clearly asserted themselves—on both ends of the communication process. The first had to do with meaning. From the moment they stabilized the new function, the users of language had a vested interest in every minute advance they could achieve in the mutual-identification of their experiences: the mutual-identification of another sign, pointing at a new type of experience; the dissection of the experience into mutually identified components; the mutual-identification of causal relations between experiences, and so on. Every advance immediately allowed for the instruction of imagination into further fields, in more precise ways. Very slowly at the beginning, and probably faster later on, the first generations of language users must have begun to spend yet more collective energy on the process of experiential mutual-identification, dissection, and categorization—this time, however, for instructive communication. Gradually, it became clear that the instructive function demanded nothing less than an entirely new outlook on the world of experience: looking at the world in order to behave in it is not the same thing as looking at the world in order to tell about it. Many experiential domains, which were of little interest to experiential communication, began to enter the picture for the first time. The physical terrain, for example, is always given as part of the context of experiential communication. Everything in it can be pointed at. Directing interlocutors to places they have never been, however, requires an entire project of classification, and eventually the creation of a new semantic field. All this, then, was the beginning of the symbolic landscape.

The second challenge had to do with the old vocalizations and gestures used by the first speakers for the instruction of imagination. On the analogue continuum of experiential communication (vocalic and manual, pre-mimetic and mimetic), the physical and emotional variability between individuals is highly functional. It is meaningful. The instructive function, however, demands that all speakers mutually identify the same gestures and vocalizations for the same mutually identified experiences—transcending individual differences. Under such a demand, then, every minute change in the arsenal of vocalizations and gestures would be selected for if it provided higher levels of perceptual distinctiveness—and thus minimized the probability of confusion. As Zuidema and de Boer (2009) show, the accumulation of such changes would eventually produce a categorical and combinatorial phonetic system. Zuidema and de Boer stress the fact that the process requires a significant level of noise: without it, the probability of confusion is too low. This is important, because the entire process was indeed embedded from the very beginning within the very noisy world of experiential-mimetic communication. The challenge was not the construction of a sound and gesture system out of nothing: it was the isolation of a distinct sound and gesture system from the analogue continuum of experiential communication. What this means, in simple words, is that instructive interactions gradually began to sound different. This was the beginning of phonetics.
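To see how selection for distinctiveness under noise can produce categories, here is a minimal toy simulation in Python. It is my own sketch, not Zuidema and de Boer's actual model, and every parameter value in it is invented for illustration: a few signal prototypes crowded on a one-dimensional acoustic continuum, noisy production, and a rule that nudges apart any two signals a listener confuses.

```python
import random

# Toy sketch (not Zuidema & de Boer's model): signals live on a
# one-dimensional acoustic continuum [0, 1]. Production is noisy;
# whenever a noisy token is misclassified, the confused pair of
# prototypes is pushed apart. Selection for perceptual distinctiveness
# gradually disperses the repertoire into well-separated categories.

random.seed(42)

N_SIGNALS = 5
NOISE = 0.05   # production noise: without it there is no confusion, hence no pressure
STEP = 0.01    # how far a confused pair is pushed apart
ROUNDS = 20000

# Start with an undispersed repertoire: all signals crowded near the middle.
signals = [0.45 + 0.02 * i for i in range(N_SIGNALS)]

def nearest(token, prototypes):
    """The listener classifies a noisy token as the nearest prototype."""
    return min(range(len(prototypes)), key=lambda i: abs(prototypes[i] - token))

for _ in range(ROUNDS):
    speaker = random.randrange(N_SIGNALS)
    token = signals[speaker] + random.gauss(0, NOISE)
    heard = nearest(token, signals)
    if heard != speaker:  # confusion: separate the two prototypes a little
        direction = 1 if signals[speaker] >= signals[heard] else -1
        signals[speaker] = min(1.0, max(0.0, signals[speaker] + direction * STEP))
        signals[heard] = min(1.0, max(0.0, signals[heard] - direction * STEP))

print(sorted(round(s, 2) for s in signals))  # crowded start, dispersed end
```

The sketch also reproduces the authors' point about noise: set NOISE to zero and confusion never occurs, so the crowded repertoire never disperses.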

For a very long time, then, the same thing was happening on both sides of the communication process. At the level of meaning, the mutually identified worldview of the symbolic landscape was beginning to demarcate itself from the experiential worlds (private and collective) of its speakers. At the level of form, increasingly phonetic vocalizations were beginning to demarcate the sounds of language from the sounds of experiential-mimetic communication. On both sides, the function of instruction was beginning to push language toward autonomy.

At a certain point in time, then, everything already established must have allowed for a new way to raise the general efficiency of the technology: some innovative speakers began to experiment with the concatenation of linguistic signs into longer and longer strings. For proponents of combinatorial syntax, this is a rather trivial development (Jackendoff 1999, Bickerton 2009), but I would like to suggest that the emergence of concatenation was actually revolutionary—in two complementary ways. To begin with, it presented listeners with a radically new challenge: they were no longer merely required to bring up from their memories clusters of experiences that were associated with mutually identified signs. They were asked to imagine the experiences associated with the sounds, and then calculate the intersection between them: concentrate on chasing-experiences, and on rabbit-experiences, and then calculate the experience of rabbit-chasing. This was revolutionary mainly because, to the extent that it worked, it allowed for communication about the intersected cluster of experiences (the cluster of rabbit-chasing) without the prior mutual-identification of the cluster itself. Speakers could now communicate not just about the experiences they had mutually identified, but also about different combinations of these experiences. This meant, among other things, that they could begin to invent imagined entities and talk about them. The cultural consequences were enormous. All this must have implied a great leap forward in the expressive power of the technology: the function from the number of signs to the number of messages, which was up to now a linear one, would turn into an exponential function. The dividends for the mutual-identification of new signs grew much higher. As signs came to be concatenated, again and again, only with certain signs but not with others, a network of semantic connections began to emerge. Very gradually, the socially constructed worldview of the symbolic landscape, which up to now included sets of isolated experiences, began to turn into a categorized system.
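Both sides of this claim, interpretation by intersection and the leap from a linear to an exponential message space, can be made concrete in a short Python sketch. This is my own toy illustration, not Dor's formalism; the experience clusters and the numbers are invented.

```python
# Toy illustration: each mutually identified sign points at a cluster of
# remembered experiences, and a concatenated string is interpreted as the
# intersection of its signs' clusters -- a cluster nobody had to mutually
# identify in advance.

experiences = {
    "chase":  {"dog-chases-cat", "hunter-chases-rabbit", "child-chases-ball"},
    "rabbit": {"rabbit-in-grass", "hunter-chases-rabbit", "rabbit-eats-grass"},
}

def interpret(string):
    """Intersect the experience clusters evoked by the signs in the string."""
    result = experiences[string[0]]
    for sign in string[1:]:
        result = result & experiences[sign]
    return result

print(interpret(["chase", "rabbit"]))  # {'hunter-chases-rabbit'}

# The combinatorial dividend: n isolated signs yield only n messages, but
# strings of length up to k yield n + n**2 + ... + n**k of them.
n, k = 50, 3
print(n, sum(n**j for j in range(1, k + 1)))  # 50 127550
```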

The second revolutionary implication of concatenation was linearization. The challenge of concatenation immediately allocated dividends to those speakers who could produce longer strings, maintain the clarity and coherence of their instructions, and do it faster. With the rise in speed, as signs came to be pronounced closer and closer together, phonological relations at the utterance level could begin to emerge, to allow for the swift move along the string of sounds. The listeners, for their part, had to nd ways to interpret the longer strings, calculate the intersections between larger sets of experiences, and also do it faster—to keep pace with the speakers. The emergence of concatenation, then, began a developmental process that gradually forced the emergence of internal complexity in the evolving technology—from the outside in. From the symbolic landscape and the phonetic system that had already begun to evolve, semantic and phonological structures began to emerge.

As the system grew in complexity, however, new types of problems began to appear. Speakers were gradually producing longer, more complex utterances, and these became more and more difficult to interpret. They were increasingly ambiguous—the concatenated signs could be re-arranged in different ways to produce different messages—and they would be increasingly opaque: each of the signs was still being mutually identified as such, but the intersections, more and more complex, were not. This, together with other problems, must have gradually begun to require a collective effort of a totally new type—that of the mutual-identification of normative rules for the regulation of the actual process of linguistic communication. This was the beginning of the protocol. Speakers, in their constant attempts to understand and be understood, began to explore different options: norms of linear order, for example, adjacency and iconicity. Such innovations began to reduce the levels of misinterpretation, and sparked a new dynamic of collective exploration and stabilization of more formalized variations—more explicit standards for mutually identified behavior.
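A hypothetical mini-example, mine rather than the text's, of why such a norm pays off: intersection alone cannot settle who did what to whom, while a mutually identified convention of linear order resolves the ambiguity without a single new sign being invented.

```python
# Without an order norm, the same three signs admit two readings:
string = ["hunter", "chase", "rabbit"]
print({"hunter chases rabbit", "rabbit chases hunter"})  # ambiguous

# With a mutually identified norm -- agent first, patient last -- the
# string has exactly one reading:
agent, action, patient = string
print(f"{agent} {action}s {patient}")  # hunter chases rabbit
```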

When they began to stabilize their protocols, speaking communities already had all the components of the technology in their rightful place. From now on, all the relevant evolutionary dynamics would spiral together. Every update in each of the parameters required accommodations throughout the system: the collective investigation of the world of experience; the ongoing expansion of the symbolic landscape; the further construction of a social-semantic worldview; the development of the dialectic relationship between this worldview and the variable experiential worlds of the speakers; the growing formalization of the sound system; the steady appearance of new communication problems; the consequent rise in the complexity and generality of the mutually identified norms invented to resolve them; the emergence of more and more complex utterances, produced on the basis of more complex clusters of prescriptions; and then, on the basis of all this, the discovery of more useful things to do with language, and new expansions, constructions, formalizations, and regulations, new problems and solutions—an endless process, inevitably leading human communities, and human individuals, toward greater and greater dependency on their own invention.

10.6 The Emergence of the Imaginative Species

The dating of the emergence of language is a highly contested issue in the discourse (see, for example, the fascinating debate between Watts 2014 and Dediu and Levinson 2014). What all socially minded scholars seem to agree on is that language began to emerge before Homo sapiens came onstage, in communities of Homo erectus or Homo heidelbergensis. In terms of the narrative presented here, this assumption makes perfect sense. These were the species that put us on track as a co-operative, inventive, technological, mutually identifying animal. It stands to reason that it was they who gradually found themselves confronted with the challenge of instruction. Like the genius collective inventors they were, they also found the solution.

In a very deep sense, however, the solution they found was probably already out of their league. They were expert experiencers, probably much better than we are, but language forced them to weaken their dependency on experience—and develop a worldview based on imagination. They were expert experiential communicators, almost certainly much better than we are, but language gradually forced them to systematically suppress most of what they knew how to communicate. At every given moment, it took the entire collective genius of their communities to push the technology forward, but individual speakers only managed to use it to variable degrees. As they began to be selected for their linguistic capacities—when language started to seriously change their selective environment—individual speakers joined the evolutionary spiral, and began to accommodate their cognitions, emotions, anatomies, physiologies, and genes to the technology. From this process emerged a new species adapted to language: Homo sapiens.

The new species adapted itself to language in exactly the two ways predicted by my theory. The first was the emergence of cognitions, anatomies, and physiologies specifically adapted to fast speech (Lieberman 1991, 2007). Tongues were lowered into the pharynx, and brain circuits developed to control the increasingly sophisticated processes involved in speech production. In terms of the hypothetical narrative detailed above, this means that Homo sapiens either received the challenge of fast concatenation from its ancestors, or invented it by itself. Either way, the specialized anatomies and physiologies evolved for the use of a technology that was already there, strongly demanding higher capacities for efficient usage. Lieberman dates the emergence of the full capacity for fast speech between fifty and ninety thousand years ago.

It is around this time, maybe slightly earlier, that Mithen (2007) identifies the first archaeological clues, from new social activities and new material artifacts, for the second adaptation—full-fledged human imagination. Mithen begins his story with the capacity of theory-of-mind that we share with the apes and defines seven steps on the way from there to modern imagination. He mentions language, of course, but almost in passing. From the point of view developed here, however, language must have been the single most important determinant of the emergence of human imagination as we know it.

Other animals probably have to use basic imagination for the planning of action, and pre-linguistic humans developed the capacity further for their complex activities. In all this, however, imagination is only activated on the spot, by experiential problems that require the retrieval of past experiences. The imagined experience combines real-time experience with materials from memory, and the imagining animal has to calculate the relevant intersections between what it experiences at the moment and what it remembers. Language, however, requires something radically different: the construction of an imagined experience on the sole basis of the creative assembly of pieces of experiential memory—in isolation from real-time experience. It requires listeners to calculate the intersections between sets of memories. For the first time, imagination is activated independently of experiencing.

Homo sapiens emerged as an answer to the two most pressing challenges of language, but it was definitely not the only species that had to face them. Dediu and Levinson (2014) analyze a wealth of evidence for the claim that our sister species, Homo neandertalensis, had language too—a perfectly reasonable assumption if we agree that language was invented by the ancestors of both species (which implies that other descendants, like Homo denisova, may have also had language). Dediu and Levinson also show that the Neandertals may have developed language further: their sound-production anatomies seem to be more suited for the task, and their cultures show clear signs of imagination. Like us, for example, they buried their dead. Where they took their language, and what it looked like, we will probably never know. But if we assume that for a significant amount of time two (or a few) human species spoke, and if we assume that they also maintained contact in one way or another, vestiges of the other species’ languages may actually still be incorporated in our languages. As Dediu and Levinson suggest, the amazing variability of our languages may reflect the influences of the different species’ languages on each other: “just as for genetics, Neandertals and Denisovans (and likely further archaic cousins) might be extinct as human lineages but continue to live in us through their genes and perhaps speak through us as well” (p. 288).

10.7 The Darker Side of Imagination

The growing human capacity for creative imagination turned us into who we are not just in positive terms. There was a darker side. At a certain point along the way, some of the more intelligent speakers must have begun to realize that the new technology could be used with a very different type of communicative intent—the intent to deceive. This was a moment of enormous consequences: the lie was born.

The intent to deceive as such was already there before language. Other animals deceive as well (Dawkins and Krebs 1978). Language, however, provided deceivers with a tool so much more powerful than presentational communication that it changed deception forever. Three interrelated factors were involved. First, experiential communication allows for the communication (honest or deceptive) of a much narrower set of meanings than language—those meanings that are anchored in the here and now of the communication event. With language, the set of possible meaning-types explodes—for honest communication as well as for deception: everything that has ever been mutually identified becomes a potential lie. Second, the nastiest characteristic of the lie is the fact that it is functionally based on the very trust it betrays: you can only lie to those with whom you share a language, and among those you can only lie to those who trust you to tell the truth. The very logic of language and the very nature of the process of socialization for language thus prepare the listeners for their unfortunate role as the potential victims of deceptive communication. Third, and much more important, is the fact that in presentational communication, communicators can only present their interlocutors with something that is there for them to experience. Communicators, for example, cannot threaten their interlocutors unless they really are frightening. Whatever is communicated may be verified or rejected by the others in real time, and because of that, presentational deception is a very difficult feat. (We still value it very much: the great actors that we admire are the best presentational deceivers.) This is why, in terms of Zahavi’s (1975) handicap principle, presentational signaling is heavy. Consequently, the apes usually deceive by hiding something that is there—not by trying to present something that is not. With language, however, the problem simply disappears. It allows communicators to tell their interlocutors about things that they cannot experience—and thus cannot verify or reject at the time of communication. Language thus deprives the listeners of the single most important tool that they could use to defend themselves against deception: the critical judgment of what they just heard on the basis of what they experience with their own senses.

Taken together, the three factors actually carry a rather amazing implication: the invention of language eventually did more to enhance the human capacity for deception than it did to enhance the human capacity for honest communication. The functional envelope of presentational deception is narrower than that of honest presentational communication, but the functional envelope of linguistic deception is wider than that of honest linguistic communication: language allows speakers to communicate mutually identified experiences external to the here and now, but as long as they are honest, they may still only communicate, at every given moment, those experiences they did experience: this is what honesty is all about. Honest speaking is bound by the contingencies of the experiential world of the speaker (both external and internal). In lying, however, the speaker is for the first time truly released from the bounds of experience: everything that can be said can be lied about. Language is deceivers’ heaven.

So much so, as a matter of fact, that it seems tempting to postulate that language was originally invented for lying—that it was born as a tool of deception. Everything said here so far indicates, however, that this could not possibly be the case. The collective effort of the invention and stabilization of the new technology must have been based on high levels of reliability and trust between the inventors: otherwise they would not have been able to get the system going. But when language was stabilized, when certain levels of trust for language were achieved, the door was opened—and some individuals rushed in. Because of that, the entire history of the evolution of language, beyond the original invention, must have been closely tied up with the function of the lie.

Theoretical models of the evolution of language usually treat the lie in terms of the more fundamental problem of the evolution of co-operative behavior. The argument runs as follows: The individuals involved in any collective project should not just be willing to share the collective gains of the project—they should also be committed to give their share of the effort. They should be willing to pay the price. For the project to survive, of course, the gains should be greater than the cost. The problem is that co-operative projects also invite freeloaders to the table: if you manage to get your share of the gains without putting in your share of the effort, you end up in an even better position than others. This is a rational strategy, which means that it should in principle be adopted by everybody. If it were, however, the entire project would collapse. Co-operation, then, is a reasonable individual choice only to the extent that the others are also willing to avoid freeloading. Everybody should agree to put some of their selfish interests aside. To explain the emergence of co-operative systems in evolution, we should find a way to theoretically control the phenomenon of freeloading. In the case of language, freeloading is lying. Language is based on trust, but once the trust is there, lying seems to be the most advantageous individual strategy. If everybody lied, however, the trust would collapse, dragging language down with it. Different writers thus try to control lying in different ways: human societies and individuals became more co-operative already before the emergence of language (Tomasello 2008, 2009); honest communication was ensured by conformist learning and moralistic enforcement of norms (Richerson and Boyd 2005); language evolved on the basis of a rise in social trust and the emergence of the rule of law (Knight 1998, 2008); societies managed to win the war against individual deception by the invention of the collective lie (Knight 1998); language evolved as a kin-selected system, which ensured honest communication within the kin group (Fitch 2010); and more.
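The freeloading logic sketched above can be put in numbers. Here is a minimal public-goods calculation in Python; the cost and multiplier values are invented for illustration only. It shows the two halves of the argument at once: within any group a freeloader out-earns a cooperator, yet universal freeloading makes the collective gains vanish.

```python
# Minimal public-goods sketch with made-up numbers: each cooperator pays a
# cost, the pooled contributions are multiplied (the collective gains of
# the project) and shared equally by everyone in the group.

def payoffs(n_cooperators, n_freeloaders, cost=1.0, multiplier=3.0):
    group = n_cooperators + n_freeloaders
    pot = n_cooperators * cost * multiplier
    share = pot / group
    # (what a cooperator nets, what a freeloader nets)
    return round(share - cost, 2), round(share, 2)

print(payoffs(10, 0))  # (2.0, 3.0): everyone cooperates, each nets 2.0
print(payoffs(9, 1))   # (1.7, 2.7): the lone freeloader does best of all
print(payoffs(0, 10))  # (-1.0, 0.0): universal freeloading, the gains vanish
```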

All these explanations are undoubtedly important, but they also seem to betray an implicit universalist assumption: the option of freeloading is equally open for all individuals. This, however, could not have been the case. At every point in the evolution of language, individuals were different in their language-related cognitive capacities, their emotional makeup, and their social status—and each of these carried implications for the individual’s ability to either lie and get away with it, or detect a lie and make sure that the liar was punished. Lying requires more emotional control than telling the truth: liars have to prevent their faces and bodies from betraying their intentions (Vrij 2001). An individual’s ability to lie and get away with it, as well as to punish a liar, is also dependent on his or her social status: other things being equal, higher status guarantees more immunity and more control. The consequences of getting caught lying are often less intimidating than those of actually telling the truth (DePaulo et al. 1996). Most importantly, lying is a more complex cognitive activity than honest speaking, and lie detection is more complex than simple comprehension, both requiring additional cognitive processing (Spence et al. 2004). This is also evidenced in the very gradual development of children into full-fledged liars (Smith and La Frenière 2013).

All this carries a simple implication: the drama of the lie should be read as a variable story. Not everybody lied, not everybody lied efficiently, not everybody lied to everyone else, and not everybody who lied got caught. More than anything, the first liars must have been among the most imaginative speakers in their communities. In honest linguistic communication, the speaker’s intent emerges from his or her own experiences. The challenge is the translation of the intent into the socially constructed terms of language. This challenge is also involved in the lie, of course, but the major difficulty resides somewhere else: the speaker has to artificially imagine an experiential intent in his or her mind which, from his or her experiential point of view, is counter-factual. The speaker has to imagine a world different from the one he or she actually experiences. All linguistic communication requires imagining for understanding. The first liars found ways to imagine for speaking. They were probably good listeners as well: lying requires a good understanding of the victim’s experiential world. And they found new ways to control and suppress their emotions, and prevent their systems of presentational communication from betraying their intentions: good liars have poker faces. This is exactly what the apes simply cannot do. All this could not have been easy. Patterns of variability among liars began to emerge: some were more imaginative than others, more controlled, more convincing, more cunning, quicker on their feet. The better they were, the more they managed to freeload.

The victims of the liars, those lied to, must have made as variable a group as the liars. Many of them may have never understood what was happening: their skills were not good enough for the detection of lies. They were easy prey. The liars lied and increased their share of the gains at the expense of their victims, and the sense of trust on which language was founded remained intact. As long as the lie was not exposed, the problem of instability never arose. Gradually, a new relationship (a very special relationship) came to be formed between two groups: the best liars and their most devoted believers. The division of labor was clear: the liars described the world to their victims, turned their attention to certain experiences and away from others, invented collective lies, and constructed the symbolic landscape to suit their goals. When those lied to began to look at the world through the perspective spoken to them by the liars—precisely where language took them to places they had no experience with—language turned into the most effective tool of social coercion that ever was. It still is.

Not all those lied to, however, were easy prey. Some of them may have been more experienced or more suspicious, better speakers and listeners, better readers of presentational communication, or simply smarter. Many of them must have been liars themselves—liars also lie to each other. They began to develop different types of defense strategies—including those discussed in the literature—and different individuals probably began to apply them to different degrees and in different ways. One defense strategy was probably a retreat into the stronghold of the safest, most intimate social bonds. The lie began to re-arrange societies along new lines of suspicion. Secrecy was another strategy. Speaking the truth became a moral issue.

At the same time, and as significantly, however, some individuals probably began to develop new ways of lie detection. A more sensitive understanding of speakers and the relationships between what they said and how they behaved allowed for the more efficient detection of liars. Better memories helped listeners keep track of what speakers were saying, for a longer time, and begin to compare. New means gradually developed to critically judge the relationship between the message and the world. Certain questions came up for the very first time: is this reasonable? Does it make sense? Could it be? These contributed to the development of language-based epistemology just as much as honest, co-operative communication.

From a certain point on, then, a full-fledged arms race was launched between the liars, with their unique capacities, and the lie detectors and decipherers, with their own sets of skills. The liars were forced to work harder, refine their techniques, develop those linguistic behaviors that allowed them to convince: this was the origin of rhetoric. Those lied to were forced to work harder too, on all fronts: among other things, this was the origin of logical investigation. Where the liars were strong enough, and especially where they learned to lie together, the levels of stability required for language were actually achieved by the lie, in its collective form (Knight 1998). In other places, the levels of stability required for language were maintained and fractured, strengthened and betrayed, again and again, in a constant battle. Freeloading was never controlled. The lie has always been a key determining factor in the web of evolutionary relationships between languages, their speakers, and their societies. A language that evolved only for honest communication would probably be much simpler, require much less from its speakers, and change society to a much less dramatic degree.

10.8 Squeezing Through the Bottleneck

This, then, is how my theory squeezes through the bottleneck of evolution. Nothing is required beyond what we already seem to know about pre-linguistic societies and their members, and what the theory of evo-devo tells us about the dynamics of evolution—no additional stipulations at the social, cultural, behavioral, communicative, cognitive, or genetic level.

The process has a very long pre-history, in which hominin communities gradually re-invented themselves on the basis of the collective activity of experiential mutual-identification. This is why the apes do not have language: they do not mutually identify. The specific function of language was invented in explorations into a new realm of communication, attempts to use the uniquely human tools of experiential-mimetic communication for something completely new—when the collective demands for information sharing began to exceed the collective capacities of experiential-mimetic communication.

The system was then pushed forward by the constant need to raise the levels of instructive success. The stabilization of the instructive strategy, and the fact that it opened totally new horizons for human societies, dictated a constant flow of innovative changes and developments, in the properties of the old tools themselves, in the communicative environment, and in the cognitive and emotional lives of individuals. Some of the changes, most importantly the emergence of concatenation, paved the way toward technological revolutions, which in their turn dictated entire sets of new dynamics on all fronts. Technological problems that appeared on the way required mutually identified solutions, and drove the development of sets of normative rules for the regulation of instructive communication. Language emerged from the outside in, like a bridge constructed simultaneously from both ends of the experiential gap: it began with the first attempts to connect experiential meaning and experiential-mimetic behavior for the function of instruction; gradually isolated the symbolic landscape from experiential meaning and phonetics from experiential-mimetic communication; and then gradually developed semantic and phonological complexity, morphology, and linear syntax. The entire process was thus characterized by high levels of developmental determinism: if we agree to position the instruction of imagination at the center of the story—with its unprecedented benefits—we find that much of the way languages are today, much of the way we are today, was already there, as potential, at the moment of origin.

Throughout the process, speakers were selected for their general ability to work with the technology: our species, and maybe our sisters and cousins too, emerged with unique adaptations to language. The adaptations, however, are not foundational to language, and they are not universal. They emerged for a technology that was already there, and all along the way they were unevenly spread across populations. The fact that almost all of us, modern humans, are capable of acquiring and using our full-fledged languages does not imply that we are less different from each other than our ancestors were. We have all climbed the ladder of language together—generation after generation of variable capacities. All along the way, certain capacities spread across entire populations, and some of them were also partially genetically assimilated, and in this sense, variability was indeed reduced. But at the very same time, languages kept evolving, new cognitive challenges emerged—and new patterns of variability were exposed. The best example of this is literacy (Jablonka and Rechav 1996): we have not adapted ourselves genetically to the activities of reading and writing (there has not been enough time and the selective pressure has not been there either), but literacy nevertheless exposed a complex pattern of variability, some of which seems to be partially genetically determined—from the quickest and most efficient readers and writers, all the way to individuals with literacy-related “disorders” such as dyslexia.

6 Comments

  1. Bryan Atkins says:

    Veritable Wowness.
    Loved it; learned muchly.
    Extremely clear, explanatory writing / thinking.
    Thank you.
    “Behavioral innovation produces cognitive novelty.”
    “Innateness is a posteriori, not a priori.” Great stuff.
    “Genetic accommodation”, had never heard of that and much more.
    Reminds me a bit of this from Schrodinger’s “What Is Life” : “Behavior and physique merge into one…. You cannot have efficient wings without attempting to fly.”

    Re language in a coding context, here’s an idea I stumbled upon re Code. (Not saying it’s original; and yes, it’s speculative.)

    “The story of human intelligence starts with a universe that is capable of encoding information.” — Ray Kurzweil — “How To Create A Mind”
    Think code is physics generated, physics efficacious Relationship Infrastructure in bio, cultural & tech networks: genetic, language, math, moral, religious, legal, monetary, etiquette, software, etc.
    Code prescribes, with varying degrees of specificity, relationship interface.
    As complexity increases, new codes are invented, generated, as cited above, by the need to process that complex information, to distill it.

    Coding structures are apps for processing information.
    For example, in the transition from hunter-gatherer social structures to the exponentially more complex information architecture of cities, we added legal, monetary, alphabet and etiquette coding structures to our cultural genome.
    These new coding structures helped us process all the new relationships generated by living in cities.
    And per increasing complexity, we’ve invented software code. (Exaptation? Think so.)
    “Software is distilled complexity . . .” Charles Simonyi

  2. Prof. Emeritus Ferrel Christensen says:

    A very nice discussion, but a couple of things bother me. The process hypothesized seems very uniformitarian, yet two crucial steps involve big differences in kind: from mere sensation to memory, and then to imagination–the latter of which the author recognizes as crucial to language. (From imagining a possible event or situation and asking whether it is actual we went to utterances regarding which we could ask whether they are true.) Problem-solving creatures from apes to elephants and some birds evidently have imagination–and how much does it differ from ours? Which leads me to my second cavil, which is that he may be selling other creatures short. There are still those stories of apes taught sign-language, including the chimp Lucy–who not only lied but allegedly showed remorse of some kind when caught, saying “Sorry Lucy”.

  3. Prof. Emeritus Ferrel Christensen says:

    “…and WONDERING whether it is actual”–I meant to say ‘wondering’. 🙂

  4. JoseAngel says:

    A fascinating, intriguing and well-argued account. I wonder, however, as human cognition is such an intricate issue, whether there are still some missing pieces, such as an increased mental power for conceptualization and the mental construction of symbols, which of course may be strongly reinforced by language and coevolve with it, but might very well have a distinct evolutionary origin and contribute in its own right to the tipping point of linguistic emergence. Also, as regards social identities, many other social animals surely do recognize the identity of other members in the group, and this dimension of sociality needs to be further studied, both in itself and regarding its contribution to linguistic interaction. Anyway, warmest congratulations, and I am glad to see a well-argued critique of Chomskian mysterious non-evolutionary biolinguistics. Of which more here, for readers of Spanish (there’s also a lecture by Chomsky, in English, on these matters): link to vanityfea.blogspot.com.es

  5. Rory Short says:

    I am a retired information systems professional. I think it would be useful to start thinking about the evolution of language by starting at a biochemical level.

    I surmise that the living brain contains a plethora of biochemical conditions and that there is some mechanism within the brain that is able to distinguish at least some of these biochemical conditions from every other condition. Let us label the distinguishing mechanism as consciousness.

    It seems to me that the brain evolved to improve the brain-possessing organism’s ability to negotiate through the world in its search for food, and consequently its chances of surviving long enough to reproduce. Up until today, as far as you, the reader, and I are concerned, this has been a successful strategy.

    How does the brain assist in negotiating through the world? It does so by means of modelling the world. This means that it can envisage itself, in theory, in relation to the outside world and decide whether carrying out in reality what was envisaged in thought only would be beneficial to it or not.

    Now the brain, in our case, is working with thoughts about the outside world, not the things themselves, so if the thoughts are going to be of any practical use the representations [thoughts] must be of real things. The biochemical conditions, i.e. representations/thoughts, must have some link between them and the things that they are representing. Evolution has hit on us feeling comfortable when a representation is confirmed by one or more of our senses and uncomfortable when it isn’t. There are six senses, five external ones, sight, sound, touch, taste, smell, and an internal one, an experience of something greater than ourselves that is experienced when we are wide awake but our brains are fully at rest.

    So evolution has hit on this way of verifying the truth of a thought/representation by linking thoughts to sensations. Language is comprised of words, i.e. thoughts, some of which are naturally directly linked to a sensation, like tree for example, and others of which cannot be so linked, like atom for example. For thoughts which cannot be directly linked to a sensation it should, through logical analysis, be possible to find a link to a thought which is verifiable through one or more of our senses. If this is not possible the thought must be a delusion.

  6. Joseph says:

    Excellent article, the evolution of language is a fascinating topic. Thank you for publishing this.
    It’s a crying shame the full book is $60+.