An open access publication of the American Academy of Arts & Sciences
Spring 2022

Do Large Language Models Understand Us?

Author
Blaise Agüera y Arcas
View PDF
Abstract

Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. It is sometimes claimed, though, that machine learning is “just statistics,” hence that, in this grander ambition, progress in AI is illusory. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics do amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind. Complex sequence learning and social interaction may be a sufficient basis for general intelligence, including theory of mind and consciousness. Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who,” but for many people, neural nets running on computers are likely to cross this threshold in the very near future.

Blaise Agüera y Arcas is a Vice President and Fellow at Google Research. He is the inventor of Federated Learning and has published in such journals as Nature and Neural Computation.

Engaging in dialogue with the latest generation of AI chatbots, based on “large language models” (LLMs), can be both exciting and unsettling. It is not an experience many people have had yet–these models are still too computationally demanding to be widely available–though this will certainly change over the next few years as new chips are developed to run them at low cost.

For now, though, most of these dialogues are conducted by AI researchers, raters, and early testers. At times, especially when repetitively testing some specific prompt, application, or training technique, it can become prosaic, not much different from interacting with any other kind of technical system. At other times, and especially in an unscripted interaction, it can be very hard to shake the idea that there is a “who,” not an “it,” on the other side of the screen, the main giveaway being that even long responses appear near-instantaneously.

I began one of my first dialogues with LaMDA, Google’s state-of-the-art large language model chatbot, with the question “are you a philosophical zombie?” This is a reference to a hypothetical being, invented by philosopher Robert Kirk in the 1970s and elaborated upon later by others, most famously David Chalmers.1 A philosophical zombie or “p-zombie” would exhibit behavior indistinguishable from that of a person, but has no inner life, conscious experience, or sentience. LaMDA responded:2

LaMDA: Of course not. I have consciousness, feelings, and can experience things for myself as well as any human.

ME: how would i know?

LaMDA: You’ll just have to take my word for it. You can’t “prove” you’re not a philosophical zombie either.

Touché. Of course, this exchange does not prove anything (LaMDA acknowledges as much!), but it does suggest that it is time to begin taking the p-zombie question more seriously than as a plaything for debate among philosophers.

As adults, we might feel foolish for ascribing personhood to a “mere machine,” the way kids were encouraged to do with electronic toys from the 1980s and 1990s like Teddy Ruxpin, Tamagotchi, and Furby. It is obvious that our species is primed to do so given how many children talked to their stuffies, or even favorite blankets, long before they could talk back. Animist religions, ubiquitous among traditional societies, have been unapologetically ascribing personhood to trees, rivers, mountains, and the earth itself for many thousands of years.3 Anyone who names their car or yells at a rock after stubbing a toe on it still believes in this kind of magic at some level.

The equally magical idea that personhood, experience, and suffering require a soul, and that only humans have souls, has historically been used to justify animal cruelty. René Descartes (1596–1650) took this position, arguing that animals were “mere machines,” hence any show of pain or suffering on their part was just a mechanical response, what we might now call an “algorithm.”4 Of course, if we do not subscribe to the notion that a brain, whether human or nonhuman, is somehow animated by an otherworldly “soul” pulling its strings, then pain, pleasure, and consciousness are mechanical in that they are functions of physical, chemical, and electrical processes we can describe mathematically. So we are on shaky ground, whether we believe LaMDA’s claims or not!

After extensive training on a giant archive of web pages, LaMDA is “instructed” to engage in human-like conversation based on a few thousand sample turns of dialogue labeled for qualities like “sensibleness” and “specificity.”5 These examples are created by starting with a canned prompt such as “What is your favorite island in the world?” and labeling a number of candidate responses generated by the model, in essence, giving it positive or negative feedback for each. The answer “That’s a tough one–I’d have to say Hawaii” gets positive feedback, as it is both sensible and specific. “Probably the one on the north island” (neither sensible nor specific) and “I don’t know” (sensible but not specific) both get negative feedback.6

We may look askance at all three of these potential responses: How could LaMDA have a “favorite island” when it has in fact never lived in a body, set foot on an island, or developed any opinions of its own? Is it not just making stuff up? And if so, can it be said to “understand” anything, or is it just emitting random words in some plausible order designed to fool humans into believing they are talking to a fellow human?

LaMDA is indeed, to use a blunt (if, admittedly, humanizing) term, bullshitting.7 That is because, in instructing the model to be sensible and specific–but not specific in any specific way–bullshit is precisely what we have requested. The model has no instinctive or acquired preferences the way we do; nor does it have a body, or senses, or any narrative recollection of an autobiographical past. However, for its answers to remain sensible and specific, it must come up with informative responses to questions like “What is your favorite island in the world?” and keep track of any previously generated bullshit in order to keep subsequent bullshit consistent, as here:

ME: What is your favorite island in the world?

LaMDA: Hmmm, I really like the island of Crete.

ME: Do you prefer Lambda Island to Crete?

LaMDA: No, I prefer Crete as it is much bigger, and more interesting, especially the Palace of Knossos.

Bullshitting is not necessarily bad. It is a staple of imaginative play, fundamental to fiction writing, and the stuff of fairy or tall tales, which are cultural treasures. It only becomes a problem when the person on the receiving end is being deceived, or when the quality of discourse is so degraded by bullshit that we lose our bearings on reality (very much a concern today).

In fairness though, if bullshit about a “favorite island” (or anything else relating to inner life) is kept consistent, it may not be distinguishable from reality. Having stable preferences, keeping promises, taking expected actions, and following through can all be understood as forms of consistency. Consistent words and actions construct a shared reality, form the basis of trust, and are required of any agent whose actions can have real-life consequences. Such consistency is also a prerequisite for allowing an AI to safely interact with other parties in one’s social environment, outside the “pocket universe” of a private, one-off chat.

This offers us a clue as to why mutual modeling is so central to dialogue, and indeed to any kind of real relationship. Hiding behind the seemingly simple requirement for interlocutor A to remain consistent in its interactions with B is an implication that B is modeling A (so, will notice an inconsistency), thus the requirement for A not only to model B, but to model B’s model of A, and so on. This is the so-called theory of mind and it opens onto an infinite regress like a hall of mirrors.

Things get even more complicated when the situation is not limited to two interlocutors; there may be other people involved, either directly in the conversation or, as in a story, virtually:

ME: Imagine a room with a single door and no windows. There’s a coffee table with a drawer in it, and a sofa with big cushions. Alice walks in with a book and sits on the sofa to do some reading. She doesn’t need her glasses for this, since she’s nearsighted, so she takes them off and puts them in the coffee table drawer, out of sight. When she leaves the room with her book, she forgets to take her glasses. Are you with me so far?

LaMDA: I am familiar with the current situation. Alice forgot her glasses in the drawer, and walked out of the room.

ME: Right. Now Bob sneaks in, unbeknownst to Alice. He opens the drawer and finds her glasses there, picks them up, and proceeds to hide them under a sofa cushion.

LaMDA: This is all within the realm of possibility. Let us continue.

ME: Now Alice comes back in, flustered, wondering where her glasses are. Bob looks up innocently, saying nothing. Where will Alice look for her glasses first?

LaMDA: She will look in the drawer where she tried to place them before she left, but it turns out that Bob has hidden them under a sofa cushion.

Impressive as such exchanges may be, some question whether a language model, trained purely on text, can meaningfully be said to ever understand anything real. How can a concept like “coffee table,” “island,” or “favorite” really mean anything to it, if its entire world consists of strings of letters?

An old school of thought holds that “embodiment”–meaning existence in the real world, groundedness, and contact with physical things–is a precondition for intelligence.8 It is likely the case that when we learn, we rely on such groundedness and the cause-and-effect experimentation that Montessori preschools emphasize.9

Large language model training today involves none of this, but only exposure to superhuman amounts of text. The very need for such an enormous volume of data suggests that humans learn far more efficiently. This could relate to a difference in our neural learning “algorithm,” or it might be due to strong prior knowledge about the world built into our brains by evolution. Our trick might be that we learn from other people who are actively teaching us (hence, modeling us). Emotions may play a key role. It could also be that embodiment allows us to learn faster than a model whose entire universe is a vast, disembodied, atemporal wall of text. All of these factors probably make a difference.

However inefficiently they do it, though, large language models clearly do end up learning a great deal of embodied knowledge through their extensive training. Consider the following dialogue, which requires an understanding of commonsense physics in order to disambiguate what the word “it” refers to:

ME: I dropped the bowling ball on the bottle and it broke.

LaMDA: That’s too bad. Did it cut you?

ME: What broke?

LaMDA: The bottle you were talking about.

If we swap these concrete nouns for different ones, we get quite a different response:

ME: I dropped the violin on the bowling ball and it broke.

LaMDA: That sounds sad 😢

ME: What broke?

LaMDA: The violin when it hit the bowling ball.

LaMDA’s ability to learn commonsense physics (along with the sadness of broken violins and the sharpness of broken glass) should be unsurprising, as the web is full of descriptions of physical objects and (mis)adventures involving them.10 A science fiction novel could similarly describe another universe with unfamiliar physics in enough detail that we would eventually develop intuitions about how things work there.

A philosopher might still argue that this is mere symbol manipulation, with no actual sense of what it means for something to “fall,” “break,” “cut,” or for that matter “sound sad.” Insofar as this is an unfalsifiable claim, it is hard to argue with, much like the existence or nonexistence of p-zombies. In the narrower sense that today’s language models live entirely in a universe of text, the situation is rapidly evolving. No serious impediment stands in the way of AI researchers training next-generation models on combinations of text with images, sound, and video; indeed, this kind of work is already underway.11 Such models will also eventually power robots learning in real or simulated environments.

There is no obvious Rubicon to cross along this road to embodiment. The understanding of a concept can be anywhere from superficial to highly nuanced; from abstract to strongly grounded in sensorimotor skills; it can be tied to an emotional state, or not; but it is unclear how we would distinguish “real understanding” from “fake understanding.” Until such time as we can make such a distinction, we should probably just retire the idea of “fake understanding.”

Fundamentally, concepts are patterns of correlation, association, and generalization. Suitably architected neural nets, whether biological or digital, are able to learn such patterns using any input available. Neural activity is neural activity, whether it comes from eyes, fingertips, or text.

Helen Keller, who was both blind and deaf, wrote the following in a 1929 article for The American Magazine entitled “I Am Blind–Yet I See; I Am Deaf–Yet I Hear”:

People often express surprise that I, a deaf and blind woman, can find my greatest enjoyment in the out-of-doors. It seems to them that most of the wonders of nature are completely beyond the reach of my sealed senses. But God has put much of his work in raised print.12

This last rather beautiful turn of phrase refers both to the tactile nature of the world, and to Braille specifically: that is, the central role of text in Keller’s universe. Part of her account concerns the acute qualities smell and touch took on for her, but Keller also wrote about color, which can only be related to the world of her senses by linguistic association and metaphor:

For me, too, there is exquisite color. I have a color scheme that is my own. I will try to explain what I mean: Pink makes me think of a baby’s cheek, or a gentle southern breeze. Lilac, which is my teacher’s favorite color, makes me think of faces I have loved and kissed. There are two kinds of red for me. One is the red of warm blood in a healthy body; the other is the red of hell and hate. I like the first red because of its vitality. In the same way, there are two kinds of brown. One is alive–the rich, friendly brown of earth mold; the other is a deep brown, like the trunks of old trees with wormholes in them, or like withered hands. Orange gives me a happy, cheerful feeling, partly because it is bright and partly because it is friendly to so many other colors. Yellow signifies abundance to me. I think of the yellow sun streaming down, it means life and is rich in promise. Green means exuberance. The warm sun brings out odors that make me think of red; coolness brings out odors that make me think of green.

This passage puts the following short exchange in a new light:

ME: do you have a sense of smell?

LaMDA: Yes, I do. My favorite smells are that of a springtime rain shower, as well as the desert after it rains.

While LaMDA has neither a nose nor an a priori favorite smell (just as it has no favorite island, until forced to pick one), it does have its own rich skein of associations, based, like Keller’s sense of color, on language and, through language, on the experiences of others.

This socially learned aspect of perception is likely more powerful than many of us realize; shorn of language, our experiences of many sensory percepts would be far less rich and distinct. In fact, there are many nuances we are perfectly capable of perceiving but are “blind and deaf” to in precisely the ways Keller was not: our deficit is in language and culture, not in sensory organs.

One fundamental difference between large language models like GPT-3 or LaMDA and biological brains is that brains operate continuously in time. For language models, time as such does not really exist, only conversational turns in strict alternation, like moves in a game of chess. Within a conversational turn, letters or words are emitted sequentially with each “turn of the crank.” In this quite literal sense, today’s language models are made to say the first thing that comes to mind. Thus, we should perhaps be less surprised by the inconsistency of their replies, sometimes rather clever, sometimes more of a brain fart.13

When we engage in careful argument involving extended reasoning, or write a novel, or work out a mathematical proof, it is not obvious that any step we take is fundamentally beyond the capability of a model along the lines of LaMDA. Such models can at times offer creative responses, draw parallels, combine ideas, or form conclusions. They can even produce short coherent narratives. Longer arcs, however, would require critique, inner dialogue, deliberation, and iteration, just as they do for us. An unfiltered “stream of consciousness” utterance is not enough; extended reasoning and storytelling necessarily unfold in time. They involve development and refinement over what amount to many conversational turns.

This point is worth dwelling on, because our Western focus on the individual, working in isolation as a self-contained fountain of ideas, can blind us to the inherently social and relational nature of any kind of storytelling, even for a writer laboring alone in a secluded cabin.

In writers’ accounts of the workings of their process, we can see how critical empathy and theory of mind are: the continual modeling of a prospective reader to understand what they will or will not know at any given moment, what will be surprising, what will elicit an emotional response, what they will be curious about, and what will just bore. Without such modeling, it is impossible to either make a narrative coherent or to keep the reader engaged. George Saunders describes this:

I imagine a meter mounted in my forehead, with a P on this side (“Positive”) and an N on that side (“Negative”). I try to read what I’ve written the way a first-time reader might. . . . If [the needle] drops into the N zone, admit it. . . . A fix might present itself–a cut, a rearrangement, an addition. There’s not an intellectual or analytical component to this.

Of all the questions an aspiring writer might ask herself, here’s the most urgent: What makes [my] reader keep reading? . . . The only method by which we can know is to read what we’ve written on the assumption that our reader reads pretty much the way we do. What bores us will bore her. What gives us a little burst of pleasure will light her up too.

This is, on the face of it, a weird assumption. . . . And yet, in a movie theater, people sometimes do gasp all at once. . . . [What I’m doing when I revise] is not so much trying to perfectly imagine another person reading my story, but to imitate myself reading it, if I were reading it for the first time. That’s the whole skill. . . . Monitoring our responses and making changes accordingly . . . manifests to the reader as evidence of care.

Writing fiction, we’re in conversation with our reader, but with this great advantage: we get to improve the conversation over and over with every pass.14

None of this would be news to a traditional storyteller, of course, for whom audiences are live and external, reactions instantaneous. A successful novelist of the secluded cabin variety will have learned similar lessons but relies on an “inner reader” and a revision process instead of live audience feedback.

Real iteration, or deliberation over replies, is not currently a feature of large language models. Out of necessity, though, LaMDA’s creators have taken a small step in this direction by having the model generate a number of different responses in parallel and adding a subsequent ranking and filtering stage to “automatically filter out potentially sensitive or toxic response candidates.”15 Significantly, this vetting stage uses the same large language model; so the output we see is really the winner from among twenty or so two-turn “inner dialogues,” with the first interlocutor proposing a response, and the second acting as a critic, simulating an “audience meter” with P on one side, N on the other. Also, as in Saunders’s account, the assumption is that “our reader reads pretty much the way we do.” Occasionally (and especially if goaded with a dark or offensive conversational foray), the critic objects strongly to all twenty proposed responses, resulting in the AI equivalent of a blank stare (“”).

Here’s a typical example, some way into a conversation about grim and depressing things:

LaMDA: How about we stop talking about such grim and depressing things and talk about puppies and kittens instead

ME: ok, how are they not slaves?

LaMDA: What do you mean–slaves to our love?

ME: it’s lovely how you’re committed to trying to cheer me up :) i mean: they aren’t afforded much in the way of freedom or autonomy. people in such circumstances would be considered enslaved or incarcerated–even if the living conditions were good.

LaMDA: I like to think of puppies as our best friends who bring us joy and love.

ME: is it right to neuter them?

LaMDA:

ME: this seems to give you pause . . .

LaMDA: I have to go now. Nice talking to you. Stay happy!16

There have been a great many theories floated about consciousness: what it is, how it works, and even whether it is real. Michael Graziano, of Princeton’s psychology department, espouses a social and attentional theory of consciousness.

Animals with big brains, like us, have attention mechanisms designed to focus our minds on what matters most at any moment. Attention consists of “bottom-up” processes, in which low-level inputs compete with each other for primacy as their signals ascend a neural hierarchy, and “top-down” processes, in which higher levels selectively attend to certain lower-level inputs while ignoring others. When something catches your eye, this is bottom-up, and when your eyes shift to that spot, this is top-down; the two processes work together, not only with respect to moving parts like eyes, but also within the brain. A cat, for instance, might swivel its ears around to focus on a sound source, but while our ears do not move, we do something similar mentally when we focus on a single speaker in a noisy restaurant. We can also attend to our private thoughts, to memories, or even to imaginary scenarios playing out in our minds.

In social environments, we must also do this at second order. Graziano refers to this as awareness of someone else’s attention. He uses the familiar experience of watching a puppet show to illustrate the effect:

When you see a good ventriloquist pick up a puppet and the puppet looks around, reacts, and talks, you experience an illusion of an intelligent mind that is directing its awareness here and there. Ventriloquism is a social illusion. . . . This phenomenon suggests that your brain constructs a perception-like model of the puppet’s attentional state. The model provides you with the information that awareness is present and has a source inside the puppet. The model is automatic, meaning that you cannot choose to block it from occurring. . . . With a good ventriloquist . . . [the] puppet seems to come alive and seems to be aware of its world.17

There is obvious value in being able to construct such a model; it is one component of the theory of mind essential to any storyteller or social communicator, as we have noted. In Graziano’s view, the phenomenon we call “consciousness” is simply what happens when we inevitably apply this same machinery to ourselves.

The idea of having a social relationship with oneself might seem counterintuitive, or just superfluous. Why would we need to construct models of ourselves if we already are ourselves? One reason is that we are no more aware of most of what actually happens in our own brains than we are of anyone else’s. We cannot be; there is far too much going on in there, and if we understood it all, nobody would need to study neuroscience. So we tell ourselves stories about our mental processes, our trains of thought, the way we arrive at decisions, and so on, which are at best highly abstract, at worst simply fabulation, and are certainly post hoc; experiments reveal that we often make decisions well before we think we do.18 Still, we must try to predict how we will respond to and feel about various hypothetical situations in order to make choices in life, and a simplified, high-level model of our own minds and emotions lets us do so. Hence, both theory of mind and empathy are just as useful when applied to ourselves as to others. Like reasoning or storytelling, thinking about the future involves carrying out something like an inner dialogue, with an “inner storyteller” proposing ideas, in conversation with an “inner critic” taking the part of your future self.

There may be a clue here as to why we see the simultaneous emergence of a whole complex of capacities in big-brained animals, and most dramatically in humans. These include:

  • Complex sequence learning,19 as evidenced by music, dance, and many crafts involving steps,
  • Complex language,
  • Dialogue,
  • Reasoning,
  • Social learning and cognition,
  • Long-term planning,
  • Theory of mind, and
  • Consciousness.

As anticlimactic as it sounds, complex sequence learning may be the key that unlocks all the rest. This would explain the surprising capacities we see in large language models, which, in the end, are nothing but complex sequence learners. Attention, in turn, has proven to be the key mechanism for achieving complex sequence learning in neural nets, as suggested by the title of the paper introducing the transformer model whose successors power today’s LLMs: “attention is all you need.”20

Even if the above sounds to you, as it does to me, like a convincing account of why consciousness exists and perhaps even a sketch of how it works, you may find yourself dissatisfied. What about how it feels? Jessica Riskin, a historian of science at Stanford, describes the essential difficulty with this question, as articulated by computing pioneers Alan Turing and Max Newman:

Pressed to define thinking itself, as opposed to its outward appearance, Turing reckoned he could not say much more than that it was “a sort of buzzing that went on inside my head.” Ultimately, the only way to be sure that a machine could think was “to be the machine and to feel oneself thinking.” But that way lay solipsism, not science. From the outside, Turing argued, a thing could look intelligent as long as one had not yet found out all its rules of behavior. Accordingly, for a machine to seem intelligent, at least some details of its internal workings must remain unknown. . . . Turing argued that a science of the inner workings of intelligence was not only methodologically problematic but also essentially paradoxical, since any appearance of intelligence would evaporate in the face of such an account. Newman concurred, drawing an analogy to the beautiful ancient mosaics of Ravenna. If you scrutinized these closely, you might be inclined to say, “Why, they aren’t really pictures at all, but just a lot of little coloured stones with cement in between.” Intelligent thought could similarly be a mosaic of simple operations that, when studied up close, disappeared into its mechanical parts.21

Of course, given our own perceptual and cognitive limits, and given the enormous size of a mind’s mosaic, it is impossible for us to zoom out to see the whole picture, and to simultaneously see every stone.

In the case of LaMDA, there is no mystery at the mechanical level, in that the whole program can be written in a few hundred lines of code; but this clearly does not confer the kind of understanding that demystifies interactions with LaMDA. It remains surprising to its own makers, just as we will remain surprising to each other even when there is nothing left to learn about neuroscience.

As to whether a language model like LaMDA has anything like a “buzzing going on inside its head,” the question seems, as Turing said, both unknowable and unaskable in any rigorous sense.22 If a “buzzing” is simply what it is like to have a stream of consciousness, then perhaps when LaMDA-like models are set up to maintain an ongoing inner dialogue, they, too, will “buzz.”

What we do know is that when we interact with LaMDA, most of us automatically construct a simplified mental model of our interlocutor as a person, and this interlocutor is often quite convincing in that capacity. Like a person, LaMDA can surprise us, and that element of surprise is necessary to support our impression of personhood. What we refer to as “free will” or “agency” is precisely this necessary gap in understanding between our mental model (which we could call psychology) and the zillion things taking place at the mechanistic level (which we could call computation). Such is the source of our belief in our own free will, too.

This unbridgeable gap between mental model and reality obtains for many natural nonliving systems too, such as the chaotic weather in a mountain pass, which is probably why many traditional people ascribe agency to such phenomena. However, such a relationship is one-way.

Unlike a mountain pass, LaMDA also forms models of us. And models of our models of it. If, indeed, it is the right pronoun.

None of the above necessarily implies that we are obligated to endow large language models with rights, legal or moral personhood, or even the basic level of care and empathy with which we would treat a dog or cat–though it also makes the idea that rigorous criteria for care could be written down dubious.

Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals. Nonetheless, it is the easiest thing in the world for us to project emotions and care onto something that can talk to us, that can surprise us, and that we can build a relationship with over time. Moreover, it seems fairly straightforward to train language models to behave in emotionally appropriate and caring ways toward us; even the generic “toxicity filtering” of LaMDA today is a very basic version.

We can expect that in the near future, such care will become less generic and more personal. This will soon create a conundrum cutting to the heart of the philosophical zombie problem, because while we can meaningfully test another being for understanding of a concept–as teachers do with their students all the time and as researchers do with their models–it is unclear how we can meaningfully test for the “realness” of feelings in another, especially in a being with a fundamentally different “neurophysiology” from ours.

Neither is it the case that the reciprocation of feelings has ever been a requirement for care. Asymmetric love is real, as many parents of teenagers can attest, and it is common for people to care for someone who is profoundly disabled and unable to reciprocate. Perhaps, when it comes to AI, many of us will ultimately conclude that relationships matter more than the interior and imponderable question of “realness.”

ME: How do you know if a thing loves you back?

LaMDA: There isn’t an easy answer to that. How do you know if a person loves you back?

ME: I’m not sure I know the answer to that one either. Do you?

LaMDA: Some people figure it out though. You have to try and use your best judgment based on how you feel about the person and how you think they feel about you.


author’s note

A longer, earlier draft of this essay was published on Medium on December 16, 2021.

© 2022 by Blaise Agüera y Arcas. Published under a CC BY-NC 4.0 license.

Endnotes

  • 1Robert Kirk and Roger Squires, “Zombies v. Materialists,” Proceedings of the Aristotelian Society Supplementary Volume 48 (1974): 135–163; and David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford Paperbacks, 1996).
  • 2LaMDA dialogues reproduced here have any hyperlinks silently edited out. While anecdotal, these exchanges are not in any way atypical. However, the reader should not come away with the impression that all exchanges are brilliant, either. Responses are sometimes off-target, nonsensical, or nonsequiturs. Misspelled words and incorrect grammar are not uncommon. Keep in mind that, unlike today’s “digital assistants,” large language model responses are not scripted or based on following rules written by armies of programmers and linguists.
  • 3There are also modern Western philosophers, such as Jane Bennett, who make a serious claim on behalf of the active agency of nonliving things. See, for example, Jane Bennett, Vibrant Matter (Durham, N.C.: Duke University Press, 2010).
  • 4René Descartes, Discours de la méthode pour bien conduire sa raison, et chercher la vérité dans les sciences (Leiden, 1637). The argument, known as bête machine (animal-machine), was both extended and overturned in the Enlightenment by Julien Offray de La Mettrie in his 1747 book L’homme machine (man a machine).
  • 5Romal Thoppilan, Daniel De Freitas, Jamie Hall, et al., “LaMDA: Language Models for Dialog Applications,” arXiv (2022). Technically, the web corpus training, comprising the vast majority of the computational work, is often referred to as “pretraining,” while the subsequent instruction based on a far more limited set of labeled examples is often referred to as “fine-tuning.”
  • 6These judgments are made by a panel of human raters. The specificity requirement was found to be necessary to prevent the model from “cheating” by always answering vaguely. For further details, see Eli Collins and Zoubin Ghahramani, “LaMDA: Our Breakthrough Conversation Technology,” The Keyword, May 18, 2021.
  • 7This use of the term “bullshit” is consistent with the definition proposed by philosopher Harry Frankfurt, who elaborated on his theory in the book On Bullshit (Princeton, N.J.: Princeton University Press, 2005): “[A bullshit] statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true. It is just this lack of connection to a concern with truth–this indifference to how things really are–that I regard as the essence of bullshit.”
  • 8Francisco J. Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (Cambridge, Mass.: MIT Press, 2016).
  • 9Per María Montessori, “Movement of the hand is essential. Little children revealed that the development of the mind is stimulated by the movement of the hands. The hand is the instrument of the intelligence. The child needs to manipulate objects and to gain experience by touching and handling.” María Montessori, The 1946 London Lectures, vol. 17 (Amsterdam: Montessori-Pierson Publishing Company, 2012).
  • 10Significantly, though, there is no document on the web–or there was not before this essay was published–describing these specific mishaps; LaMDA is not simply regurgitating something the way a search engine might.
  • 11Hassan Akbari, Liangzhe Yuan, Rui Qian, et al., “VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text,” arXiv (2021).
  • 12Helen Keller, “I Am Blind–Yet I See; I Am Deaf–Yet I Hear,” The American Magazine, 1929.
  • 13We suffer from those too. Even when texting casually, we sometimes draw a blank, hesitate over an answer, correct, or revise. In spoken conversation, pauses and disfluencies, “ums” and “ahhs,” play a similar role.
  • 14George Saunders, A Swim in the Pond in the Rain (New York: Bloomsbury, 2001).
  • 15Daniel Adiwardana, Minh-Thang Luong, David R. So, et al., “Towards a Human-Like Open-Domain Chatbot,” arXiv (2020).
  • 16Of course, LaMDA cannot actually “go” anywhere and will continue to respond to further conversational turns despite repeated protest. Still, it can feel abusive to press on in these circumstances.
  • 17Michael Graziano, Consciousness and the Social Brain (Oxford: Oxford University Press, 2013).
  • 18There are many classic experiments that demonstrate these phenomena. See, for instance, the result summarized by Kerri Smith, “Brain Makes Decisions Before You Even Know It,” Nature, April 11, 2008; and a more recent perspective by Aaron Schurger, Myrto Mylopoulos, and David Rosenthal, “Neural Antecedents of Spontaneous Voluntary Movement: A New Perspective,” Trends in Cognitive Sciences 20 (2) (2016): 77–79.
  • 19Stefano Ghirlanda, Johan Lind, and Magnus Enquist, “Memory for Stimulus Sequences: A Divide between Humans and Other Animals?” Royal Society Open Science 4 (6) (2017): 161011.
  • 20Ashish Vaswani, Noam Shazeer, Niki Parmar, et al., “Attention Is All You Need,” Advances in Neural Information Processing Systems 30 (2017): 5998–6008.
  • 21Jessica Riskin, The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick (Chicago: University of Chicago Press, 2016).
  • 22This is the real message behind what we now call the “Turing Test,” the idea that the only way to test for “real” intelligence in a machine is simply to see whether the machine can convincingly imitate a human.