
Rational Magic

Tara Isabella Burton at The New Atlantis


“It broke open a shell in my heart,” the young man I’ll call Vogel said of reading Nietzsche’s Human, All Too Human when we met for an interview earlier this year at a Brooklyn bar. “I was very, very depressed at that time…. Beauty in the world had become hauntingly distant. It existed over the horizon behind some mountain and I couldn’t access it.”

Vogel wears the owl of Minerva around his neck. It’s a reference to the pursuit of wisdom, but the charm also evokes Hegel’s maxim “The owl of Minerva spreads its wings only with the falling of the dusk”: the idea that true insight comes only late, at the end of an era. His bracelets, too, are symbolic: they represent Huginn and Muninn, who in Norse mythology are the two ravens who sit on the shoulders of the god Odin, and whose names mean thought and memory.

Vogel is part of a loose online subculture known as the postrationalists — also known by the jokey endonym “this part of Twitter,” or TPOT. They are a group of writers, thinkers, readers, and Internet trolls alike who were once rationalists, or members of adjacent communities like the effective altruism movement, but grew disillusioned. To them, rationality culture’s technocratic focus on ameliorating the human condition through hyper-utilitarian goals — increasing the number of malaria nets in the developing world, say, or minimizing the existential risk posed by the development of unfriendly artificial intelligence — had come at the expense of taking seriously the less quantifiable elements of a well-lived human life.

On Twitter, Vogel calls himself Prince Vogelfrei and tweets a combination of subcultural in-jokes, deeply earnest meditations on the nature of spiritual reality, and ambiguous amalgamations of the two (example: “get over all social fomo by contemplating the inaccessible experience of all history and prehistory, the primordial love stories of rodent-like ancestors”). Vogelfrei in German means “outlawed,” but literally it is “free as a bird” — not a bad thing to be, on the often intellectually siloed birdsite. It’s also a reference to a series of poems by Nietzsche sung by a prince who, imitating birds, sets himself spiritually free.

Vogel’s pursuit of truth had hardly been painless. Raised a pastor’s son and educated in evangelical Christian homeschool circles, he was a teenager living in Louisville, Kentucky, when he experienced a crisis of faith, or what he calls a “deconversion.” It was, he told me, a “psychological shock,” even “a mystical experience.” He had “a vision of God being sacrificed on the altar of truth.” Traditional Christianity now seemed untenable to him; untenable, too, the secular world, which seemed no less full of unexamined dogma, tinged with moral and intellectual unseriousness. Unwilling to enter the standard life tracks that seemed most easily available to him — ministry, say, or conservative politics — he moved to Seattle, where he worked for a while as a janitor at the University of Washington.

He continued to seek out new avenues of intellectual and spiritual engagement. He got involved with his local rationalist community, ultimately running a rationalist reading group. He also joined a group devoted to the practice of historical European martial arts: people he described as “Renaissance Faire, pagan types.” Reading Nietzsche around this time, he saw in the philosopher a model for how to bridge his intellectual and creative worlds. Nietzsche, Vogel argued, “disarms some of the reasons that intelligent people often end up very cynical by doing better than them, but still coming back around to a perspective of hope, essentially.”

Vogel’s enthusiasm for beauty, for poetry, for mythic references, for an esoteric strain of quasi-occult religious thought called Traditionalism: all of this, his onetime compatriots in the rationality community might once have dismissed as New Age claptrap. But Vogel’s personal journey from rationalism to postrationalism is part of a wider intellectual shift — in hyper-STEM-focused Silicon Valley circles and beyond — toward a new openness to the religious, the numinous, and the perilously “woo.”

You might call it the postrationalist turn: a cultural shift in both relatively “normie” and hyper-weird online spaces. Whether you call it spiritual hunger, reactionary atavism, or postliberal epistemology, more and more young, intellectually inclined, and politically heterodox thinkers (and would-be thinkers) are showing disillusionment with the contemporary faith in technocracy and personal autonomy. They see this combination as having contributed to the fundamentally alienating character of modern Western life. The chipper, distinctly liberal optimism of rationalist culture that defines so much of Silicon Valley ideology — that intelligent people, using the right epistemic tools, can think better, and save the world by doing so — is giving way, not to pessimism, exactly, but to a kind of techno-apocalypticism. We’ve run up against the limits — political, cultural, and social alike — of our civilizational progression; and something newer, weirder, maybe even a little more exciting, has to take its place. Some of what we’ve lost — a sense of wonder, say, or the transcendent — must be restored.

‘Raise the Sanity Waterline’

A quick primer for the less-online. The rationality community got its start on a few blogs in the early 2000s. The first, Overcoming Bias, founded in 2006 and affiliated with Oxford’s Future of Humanity Institute, was initially co-written by economics professor Robin Hanson and, somewhat improbably, Eliezer Yudkowsky, a self-taught AI researcher. Yudkowsky’s chief interest was in saving the world from the existential threat posed by the inevitable development of a hostile artificial intelligence capable of wiping out humanity, and his primary medium for recruiting people to his cause was a wildly popular, nearly 700,000-word fanfiction called Harry Potter and the Methods of Rationality, in which Harry learns that the human mind is capable of far more magic than a wooden wand could ever provide.

As its name might suggest, Overcoming Bias was dedicated to figuring out all the ways in which human beings have gotten very good at lying to ourselves, whether through fear of the unknown or a desire for self-aggrandizement or just plain being really bad at math, as well as all the ways in which we might train ourselves to think better. By 2009, Yudkowsky had decamped to his own blog, LessWrong, which purported to help people be, well, just that, by hacking into our primordial predator-avoiding monkey-brains and helping them to run new neurological software, optimized for life in a complicated modern world.

Both LessWrong and the similarly focused Slate Star Codex, founded in 2013 by a Bay Area psychiatrist writing under the pen name Scott Alexander, attracted not just passive readers but enthusiastic commenters, who were drawn to the promise of individual self-improvement as well as the potential to discuss philosophy, science, and technology with people as uncompromisingly devoted to the truth as they believed they were. These commenters — a mixture of the traditionally educated and autodidacts, generally STEM-focused and with a higher-than-average share of people who identified as being on the autism spectrum — tended to be suspicious not just of the humanities as a discipline, but of all the ways in which human emotional response clouded practical judgment.

Central to the rationalist worldview was the idea that nothing — not social niceties, not fear of political incorrectness, certainly not unwarranted emotion — could, or should, get between human beings and their ability to apprehend the world as it really is. One longtime rationalist of my acquaintance described the rationalist credo to me as “truth for truth’s sake.” No topic, no matter how potentially politically incendiary, was off-limits. Truth, the rationalists generally believed, would set humanity free. Sure, that meant tolerating the odd fascist, Nazi, or neoreactionary in the LessWrong or Slate Star Codex comments sections (New Right leader Curtis Yarvin, then writing as Mencius Moldbug, was among them). But free and open debate, even with people whose views you find abhorrent, was so central to the rationalist ethos that the most obvious alternative — the kind of harm-focused safeguarding used to foster the ostensibly “safe spaces” of the social justice left — seemed unthinkable.

The rationalist universe soon expanded beyond the blogs themselves. Members of the wider LessWrong community founded the Center for Applied Rationality in Berkeley in 2012. Its purpose was to disseminate rationalist principles more widely — or, in rationalist parlance, to “raise the sanity waterline.” The community focused on big-picture, global-level issues, most notably and controversially Yudkowsky’s pet concern: the “x-risk” (“x” for existential) that we will inadvertently create unfriendly artificial intelligence that will wipe out human life altogether.

Read the rest
