An open access publication of the American Academy of Arts & Sciences
Summer 2007

on inventing language

Author
Susan J. Goldin-Meadow

Susan Goldin-Meadow, a Fellow of the American Academy since 2005, is Beardsley Ruml Distinguished Service Professor at the University of Chicago. Her books, “The Resilience of Language” (2003) and “Hearing Gesture: How Our Hands Help Us Think” (2003), explore what we can learn about the human mind from looking at our hands. She also coedited “Language in Mind: Advances in the Study of Language and Thought” (with Dedre Gentner, 2003).

There is no human group, no matter how remote, that does not have language–and no nonhuman group that does. By language I mean a combinatorial system of symbols with structure at more than one level (sentence, word, morpheme, etc.), used not only to make things happen but also to share thoughts about the present and the nonpresent. Many nonhuman animals have signaling systems to attract mates, locate food, and warn each other about predators, but they cannot combine these signals hierarchically to create new, meaningful communications in any other context.

Not only are nonhuman groups unable to invent a communication system like human language spontaneously, but, despite arduous attempts, they cannot be taught one either (even a potentially accessible one, like a system produced with the hands). Chimpanzees and bonobos, our closest primate relatives, are able to learn the words of the system but not the underlying or surface structures that organize those words. Moreover, they use those words only to make requests (of humans), and not to make comments about the world around them. In contrast, when exposed to a language, human children acquire that language without any explicit instruction at all. Indeed, human children are arguably the best language learners around, arriving at more complex and complete linguistic systems than do older learners.

But can human children invent language? Language was clearly invented at some point in the past and then transmitted from generation to generation. Was it a one-time invention, requiring just the right assembly of factors, or is language so central to being human that it can be invented anew by each generation? This is a question that seems impossible to answer–today’s children do not typically have the opportunity to invent a language, as they are all exposed from birth to the language of their community. The only way to address the question is to find children who have not been exposed to a human language.

There are tales, perhaps apocryphal, of human children raised by animals, which would, of course, not provide them with human language. Under such circumstances, children do not invent language. Even children raised by inhumane parents who deprive them of linguistic input do not invent language. But it is hard to imagine why a child living under such inhospitable circumstances would do so–at a minimum, there is no one with whom to use the language.

It turns out, however, that there are children, raised by caring parents, who are unable to take advantage of the language to which they are exposed. These children are congenitally deaf, with hearing losses so severe that they cannot acquire the spoken language that surrounds them, even with intensive instruction. Moreover, they are born to hearing parents who do not know a sign language and have not placed their children in a situation where they would be exposed to one. These children lack an accessible model for human language. Do they invent one?

My colleagues and I have been studying children in these circumstances for thirty years. When we began, it was common for hearing parents to send their deaf children to oral schools. But despite the schools’ best efforts, many profoundly deaf children were unable to acquire spoken language (this was many years before cochlear implants came on the scene). The children we studied had made little progress in English and had not been exposed to either American Sign Language or any form of Signed English.

We found that the children were able to communicate with the hearing individuals in their worlds, and used gesture to do so. This is hardly noteworthy since all hearing speakers gesture when they talk. The surprising result was that the deaf children’s gestures did not look like the gestures their hearing parents produced. Their gestures had language-like structure; the parents’ gestures did not.

The children combined gestures, which were themselves composed of parts (akin to morphemes in conventional sign languages), into sentence-like strings that were structured with grammatical rules for deletion and order. For example, to ask me to share a snack, one child pointed at the snack, gestured eat (a quick jab of an O-shaped hand at his mouth), and then pointed at me. He typically placed gestures for the object of an action before gestures for the action, and gestures for the agent of an action after.

Moreover, the children’s gesture systems were generative: the children combined gestures conveying several propositions within the bounds of a single gesture sentence. For example, one child produced several propositions about snow shovels within a single (albeit run-on) sentence: that they are used to dig, that they are used when boots are worn, that they are used outside and kept downstairs. The gesture systems had parts of speech (nouns, verbs, adjectives). They were also used to make generic statements (as in the snow shovel example) and to tell stories about the past, the present, the future, and the hypothetical. The children even used their gestures to talk to themselves and about their own gestures.

In contrast, the children’s hearing parents used their gestures as all speakers do. Their sloppily formed gestures were synchronized with speech and rarely combined with one another. The gestures speakers produce are meaningful, but they convey their meanings holistically, with no componential parts and no hierarchical structure.

The striking finding is not that the deaf children communicate with their gestures. It’s that the gestures are structured in language-like ways, while their parents’ gestures are not. Indeed, their gestures are sufficiently language-like that they have been called home signs. The children could have used mime to communicate–for example, miming eating a snack to invite me to join the activity. But they did not. They produced discrete, well-formed gestures that looked more like beads on a string than a continuous unsegmentable ribbon of movement. Segmentation and combination are at the heart of human language, and they formed the foundation of the deaf children’s gesture systems. But segmentation and combination were not modeled for the children in their parents’ gestures. The children had spontaneously imposed this organization on their communications.

While the deaf children created the rudiments of language without a model to guide them, they did not formulate a full-blown linguistic system–perhaps for good reason. Their parents wanted them to learn to talk and thus did not share the children’s gesture systems with them. As a result, the children’s systems were one-sided: they produced language-like gestures to their parents, but received nonlinguistic co-speech gestures in return.

What would happen if such a child were given a partner with whom to develop language? Just such a situation arose in the 1980s in Nicaragua when deaf children were brought together in a group for the very first time. The deaf children had been born to hearing parents and, like the deaf children I have described, presumably had invented gesture systems in their individual homes. When they were brought together, they developed a common sign language, which has come to be called Nicaraguan Sign Language (NSL). The distance between the home signs invented by individual children without a partner and the sign system created by this first cohort of NSL signers can tell us which linguistic properties require a shared community in order to be introduced into human language.

But Nicaraguan Sign Language has not stopped growing. Every year, new deaf children enter the group and learn to sign among their peers. A second cohort of signers had as its input the sign system developed by the first cohort. Interestingly, the second-cohort signers continued to adapt the system so that the product became even more language-like. The properties of language that cropped up in the second and subsequent cohorts are properties that depend on passing the system through fresh minds–linguistic properties that must be transmitted from one ‘generation’ to the next in order to be introduced into human language.

NSL is not unique among sign languages–it is likely that all sign languages (including American Sign Language) came about through a similar process. Another recent example is the Al-Sayyid Bedouin community, founded two hundred years ago and now in its seventh generation, with 3,500 members. Within the last three generations, 150 deaf individuals have been born into this community, all descended from two of the founders’ five sons. Al-Sayyid Bedouin Sign Language (ABSL) was thus born. ABSL differs from NSL in that it is developing in a socially stable community, with children learning the system from their parents. The signers from each of the three generations are likely to differ, and to differ systematically, in the system of signs they employ. By observing signers from each generation, we can therefore make good guesses as to when a particular linguistic property first entered the language.

Furthermore, because the individual families in the community are tightly knit, with strong bonds within families but not across them, we can chart changes in the language in relation to the social network of the community. For example, some linguistic properties remain within a single family; others spread throughout the community. Is there a systematic difference between properties that do and do not spread? In addition, because we know who talks to whom, we may be able to determine who was responsible for spreading a particular property (the men in the community? the women? the adolescents? a socially dominant family?). This small and self-contained community consequently offers a singular perspective on some classic questions in historical linguistics.

A priori we might have expected sign languages to be structured differently from spoken languages. After all, sign languages are processed by eye and hand, whereas spoken languages are processed by ear and mouth. But, in many ways, the languages are not different. Sign languages all over the world are characterized by the same hierarchy of linguistic structures (syntax, morphology, phonology), and thus draw on the same human abilities as spoken languages. Furthermore, children exposed to sign language from birth acquire that language as naturally as hearing children acquire the spoken language to which they are exposed, achieving major milestones at approximately the same ages.

However, the manual modality makes sign languages unique in at least one respect. It makes it easy to invent representational forms that can be immediately understood by naïve observers (e.g., indexical pointing gestures, iconic gestures). As a result, as we have seen here, sign languages can be created anew by individuals and groups, and thus offer us a unique opportunity to glimpse language in its infant stages and watch it grow.

Homemade sign systems also allow us to address questions about the relation between language and thought. Languages around the globe classify experience in different ways. Benjamin Whorf, following Edward Sapir, first popularized the notion that linguistic classifications might influence not only how people talk but also how they think. More specifically, Whorf suggested that the required use of a particular linguistic categorization might, at some point, also affect how speakers categorize the world even when they are not talking.

This provocative hypothesis is most often explored by comparing the nonlinguistic performance of speakers whose languages differ systematically in the way they categorize experience. But deaf children who have had no exposure to a conventional language and invent their own are also relevant to the hypothesis. Their thoughts cannot possibly have been shaped by a conventional language. Therefore, the conceptual categories the children do express in their invented languages must reveal thoughts that do not depend on conventional language. And the categories that the deaf children do not introduce into their homemade languages have the potential to reflect thoughts that do depend on language. If, for example, a deaf child does not invent gestures for the spatial relations top, middle, bottom, will that child have more difficulty solving a task that depends on these relations than will a child whose language provides her with linguistic terms for the relations?

Whatever the answers to these questions, it is clear that language is not a fragile ability in humans. It is handed down from generation to generation, but it need not be. Each new generation of human children has the potential to invent language. The language we learn is thus influenced not only by the language around us, but also by the language within us.