Autocomplete: Coming to Terms with Our New Textual Culture

By Richard Gibson at Hedgehog Review

In his 1987 book Die Schrift, the Czech-born Brazilian philosopher Vilém Flusser posed the question of whether writing had a future (Hat Schreiben Zukunft? reads Flusser’s subtitle). As he surveyed the media landscape of the late twentieth century, Flusser observed that some aspects of writing (“this ordering of written signs into rows”) could already be “mechanized and automated” thanks to word processing, and he foresaw that artificial intelligence would “surely become more intelligent in the future,” allowing the mechanization of writing to proceed further.

In fact, Flusser anticipated that AI would soon exhibit the hallmark cognitive traits of the mental world inaugurated by writing. Of that mental world, Flusser writes, “Only one who writes lines can think logically, calculate, criticize, pursue knowledge, philosophize.” Above all, Flusser credits writing with giving humans “historical consciousness,” which he defines as the ability to see and describe the world in terms of goal-oriented processes—as opposed to the unchanging cycles that marked prehistorical societies. AIs, in Flusser’s view, will soon “possess a historical consciousness far superior to ours,” allowing them to “make better, faster, and more varied history than we ever did,” with the result that we’ll leave the business of history-writing to them. Writing may indeed have a future, Flusser believed, but that future won’t be an entirely, or even primarily, human one.

From our contemporary vantage point, Flusser’s scenario, though alarming to him, seems more than a little idyllic. In our age of troll farms and fake news, text generators seem less like the natural inheritors of history-writing than a massive impediment to getting the record straight. We can see that AI has as much potential to wreak havoc on our writing culture as it does to offer more-than-human insight. Yet Flusser’s way of being wrong is illuminating because it hinges on the implicit, and not uncommon, assumption that AIs will surpass human writers in both their epistemic and stylistic capacities. In other words, Flusser assumes that since AIs will know more and write better than we can, we’ll eventually have to step aside.

A more complicated, and perhaps uncomfortable, reality now faces us, however. Thanks to the rapid improvement in deep learning techniques over the last decade, computer scientists have created text generators that are indeed capable of producing plausible sentence-, paragraph-, and article-length writing in a range of genres and in a matter of minutes, if not seconds. While these are truly magnificent achievements, the AI writers are nowhere close to the independent, omniscient virtuosos that Flusser imagined thirty-five years ago. The most advanced are still profoundly dependent on ongoing human efforts to amass and distribute writing, as their databases are culled largely from the Web. What they write reflects what we have written and still are writing. As a result, we all must adapt to a new textual culture in which functional but far from all-knowing AIs will be active, fulfilling numerous writing tasks and thereby destabilizing old practices and routines.

Leading the pack is OpenAI’s vaunted GPT-3 (or “Generative Pre-trained Transformer”), a large language model (LLM) trained on 45 terabytes of materials, its sources including Wikipedia, Google Books, and Common Crawl, an archival service that periodically harvests the Web. That vast data set, and OpenAI’s impressive use of neural networks, allow the GPT-3 interface to behave like the autocomplete options in your email or smartphone, in which the algorithm’s objective is to predict what should come next based on what you’ve written, though in this case on a far grander scale and with the possibility of using the conventions of a specified genre.
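The prediction logic behind such autocomplete features can be illustrated with a toy sketch, far simpler than GPT-3’s neural networks: a bigram model that, for each word in a small corpus, records which word most often follows it. The corpus and function names below are invented for illustration only.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus (invented for this sketch).
corpus = "the cat sat on the mat and the cat ate".split()

# For each word, count how often each other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def autocomplete(word):
    """Predict the next word as the most frequent follower seen in training."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

The point of the sketch is the dependence Gibson describes: the model can only echo statistics of the text it was trained on. GPT-3 replaces these raw bigram counts with a neural network conditioned on a long context, but the objective, predicting what should come next, is the same.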
