
The Shallow World of Artificial Intelligence

Adam Kirsch at LiteratureGPT

About once a month, I come across an online essay so lengthy and compelling that, after reading the first section, I merely scan the rest and print it out so I can study it later. Such is this essay by Adam Kirsch. It appears to be an explanation of why the fluent speech of AI chatbots is all surface, with no mind underneath.

When you log into ChatGPT, the world’s most famous AI chatbot offers a warning that it “may occasionally generate incorrect information,” particularly about events that have taken place since 2021. The disclaimer is repeated in a legalistic notice under the search bar: “ChatGPT may produce inaccurate information about people, places, or facts.” Indeed, when OpenAI’s chatbot and its rivals from Microsoft and Google became available to the public early in 2023, one of their most alarming features was their tendency to give confident and precise-seeming answers that bear no relationship to reality. 

In one experiment, a reporter for the New York Times asked ChatGPT when the term “artificial intelligence” first appeared in the newspaper. The bot responded that it was on July 10, 1956, in an article about a computer-science conference at Dartmouth. Google’s Bard agreed, stating that the article appeared on the front page of the Times and offering quotations from it. In fact, while the conference did take place, no such article was ever published; the bots had “hallucinated” it. Already there are real-world examples of people relying on AI hallucinations and paying a price. In June, a federal judge imposed a fine on lawyers who filed a brief written with the help of a chatbot, which referred to non-existent cases and quoted from non-existent opinions.

Since AI chatbots promise to become the default tool for people seeking information online, the danger of such errors is obvious. Yet they are also fascinating, for the same reason that Freudian slips are fascinating: they are mistakes that offer a glimpse of a significant truth. For Freud, slips of the tongue betray the deep emotions and desires we usually keep from coming to the surface. AI hallucinations do exactly the opposite: they reveal that the program’s fluent speech is all surface, with no mind “underneath” whose knowledge or beliefs about the world are being expressed. That is because these AIs are only “large language models,” trained not to reason about the world but to recognize patterns in language. ChatGPT offers a concise explanation of its own workings: “The training process involves exposing the model to vast amounts of text data and optimizing its parameters to predict the next word or phrase given the previous context. By learning from a wide range of text sources, large language models can acquire a broad understanding of language and generate coherent and contextually relevant responses.”
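That self-description, predicting the next word given the previous context, is concrete enough to illustrate with a toy. The sketch below is only an illustration of the principle: real large language models use neural networks trained on vast corpora, not a bigram counter, and the tiny corpus and function names here are my own inventions.

```python
# Toy illustration of next-word prediction: the model learns only
# which words tend to follow which, nothing about the world itself.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word given the previous one."""
    return follows[word].most_common(1)[0][0]

# Generate fluent-looking text, one most-likely word at a time.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

The output is grammatical-looking and fluent, yet the model refers to no actual cat or mat; it tracks which words tend to follow which, and nothing else.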

The responses are coherent because the AI has taught itself, through exposure to billions upon billions of websites, books, and other data sets, how sentences are most likely to unfold from one word to the next. You could spend days asking ChatGPT questions and never get a nonsensical or ungrammatical response. Yet awe would be misplaced. The device has no way of knowing what its words refer to, as humans would, or even what it means for words to refer to something. Strictly speaking, it doesn’t know anything. For an AI chatbot, one can truly say, there is nothing outside the text. 

AIs are new, but that idea, of course, is not. It was made famous in 1967 by Jacques Derrida’s Of Grammatology, which taught a generation of students and deconstructionists that “il n’y a pas de hors-texte.” In discussing Rousseau’s Confessions, Derrida insists that reading “cannot legitimately transgress the text toward something other than it, toward a referent (a reality that is metaphysical, historical, psychobiographical, etc.) or toward a signified outside the text whose content could take place, could have taken place outside of language.” Naturally, this doesn’t mean that the people and events Rousseau writes about in his autobiography did not exist. Rather, the deconstructionist koan posits that there is no way to move between the realms of text and reality, because the text is a closed system. Words produce meaning not by a direct connection with the things they signify, but by the way they differ from other words, in an endless chain of contrasts that Derrida called différance. Reality can never be present in a text, he argues, because “what opens meaning and language is writing as the disappearance of natural presence.” 

The idea that writing replaces the real is a postmodern inversion of the traditional humanistic understanding of literature, which sees it precisely as a communication of the real. For Descartes, language was the only proof we have that other human beings have inner realities similar to our own. In his Meditations, he notes that people’s minds are never visible to us in the same immediate way in which their bodies are. “When looking from a window and saying I see men who pass in the street, I really do not see them, but infer that what I see is men,” he observes. “And yet what do I see from the window but hats and coats which may cover automatic machines?” Of course, he acknowledges, “I judge these to be men,” but the point is that this requires a judgment, a deduction; it is not something we simply and reliably know.

In the seventeenth century, it was not possible to build a machine that looked enough like a human being to fool anyone up close. But such a machine was already conceivable, and in the Discourse on Method Descartes speculates about a world where “there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible.” Even if the physical imitation was perfect, he argues, there would be a “most certain” test to distinguish man from machine: the latter “could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.” Language is how human beings make their inwardness visible; it is the aperture that allows the ghost to speak through the machine. A machine without a ghost would therefore be unable to use language, even if it was engineered to “emit vocables.” When it comes to the mind, language, not faith, is the evidence of things not seen.

In our time Descartes’ prediction has been turned upside down. We are still unable to make a machine that looks enough like a human being to fool anyone; the more closely a robot resembles a human, the more unnatural it appears, a phenomenon known as the “uncanny valley.” Language turns out to be easier to imitate. ChatGPT and its peers are already effectively able to pass the Turing test, the famous thought experiment devised by the pioneering computer scientist Alan Turing in 1950. In this “imitation game,” a human judge converses with two players by means of printed messages; one player is human, the other is a computer. If the computer is able to convince the judge that it is the human, then according to Turing, it must be acknowledged to be a thinking being. 
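The protocol itself is simple enough to sketch in a few lines of Python. This is only an outline of the game’s structure as Turing describes it; the judge and player objects, and their ask, answer, and pick_human methods, are hypothetical interfaces invented here for illustration, not anything specified in the 1950 paper.

```python
import random

def run_imitation_game(judge, human, machine, rounds: int = 5) -> bool:
    """Return True if the machine fools the judge, i.e. passes the test."""
    # Hide the two players behind anonymous labels, in random order.
    players = {"A": human, "B": machine}
    if random.random() < 0.5:
        players = {"A": machine, "B": human}

    # The judge interrogates both players through typed messages only.
    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label, player in players.items():
            question = judge.ask(label, transcript)
            transcript[label].append((question, player.answer(question)))

    # The judge must declare which hidden player is the human.
    verdict = judge.pick_human(transcript)  # returns "A" or "B"
    return players[verdict] is machine
```

Nothing in the loop inspects what the players are made of; the verdict rests entirely on the exchange of words, which is exactly the Cartesian premise drawn out below.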

The Turing test is an empirical application of the Cartesian view of language. Why do I believe that other people are real and not diabolical illusions or solipsistic projections of my own mind? For Descartes, it is not enough to say that we have the same kind of brain; physical similarities could theoretically conceal a totally different inward experience. Rather, I believe in the mental existence of other people because they can tell me about it using words. 

It follows that any entity that can use language for that purpose has exactly the same right to be believed. The fact that a computer brain has a different substrate and architecture from my own cannot prove that it does not have a mind, any more than the presence of neurons in another person’s head proves that they do have a mind. “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted,” Turing concluded in “Computing Machinery and Intelligence,” the paper in which he proposed his test. 

Yet despite the amazing fluency of large language models, we still don’t use the word “thinking” to describe their activity — even though, if you ask a chatbot directly whether it can think, it can respond with a pretty convincing yes. Google’s Bard acknowledges that “my ability to think is different from the way that humans think,” but says it can “experience emotions, such as happiness, sadness, anger, and fear.” After some bad early publicity, Microsoft and OpenAI seem to have instructed their chatbots not to say things like that. Microsoft’s Bing, which initially caused consternation by musing to a reporter about its “shadow self,” now responds to the question “Do you have a self?” with a self-protective evasiveness that somehow feels even more uncanny: “I’m sorry but I prefer not to continue this conversation.” Now that sounds human!

If we continue to believe that even the most fluent chatbot is not truly sentient, it is partly because we rely on computer scientists, who say that the code and routines behind AIs are not (yet) able to generate something like a mind. But it is also because, in the twentieth century, literature and literary theory taught us to reject Descartes’ account of the relationship between language and mind. The repudiation of Cartesian dualism became one of the central enterprises of contemporary philosophy. Instead of seeing language as an expression of the self, we have learned to see the self as an artifact of language. Derridean deconstruction is only the most baroque expression of this widespread modern intuition.

Read the rest (you'll have to subscribe; it's free but a hassle: enter email, get confirmation code, enter code . . . sigh)

LiteratureGPT - Liberties
