The Occasion of the 'Creative Singularity'
Thoughts on the semantics of "artificial" and "intelligence," and how we might use the moment to build a future we actually want
Earlier this month I published a piece on what I call the Creative Singularity, the moment of upheaval in the basic norms of creative work—and the broader context for creativity—we’re witnessing through the advent of publicly accessible generative engines. I also published some B-sides here on Reality Studies for a deeper dive, and shared more thoughts in an interview with Hypebeast.
I realized I wanted to elaborate on why I chose that term, and to examine the role that language, metaphor, and narrative play in how we culturally metabolize emerging technologies under Postreality.
Artificial? Intelligence?
Starting at the start: I have problems with the term “artificial intelligence.” I’m not alone in this; some of the objections were recently outlined by Jaron Lanier. In brief: calling these technologies “AI” mythologizes tools that alter and augment social collaboration, casting them as mysterious, unknowable entities like gods and creatures. Another problem has to do with “artificial”: who are we to designate any other intelligence artificial? We know so little about our own intelligence, and even within what we do know, we can’t settle on a consensus definition. Either all intelligences are artificial or none are; either way, it makes no sense to use this term to describe machine intelligence.
I also have problems with the term “artificial general intelligence,” and with the entire evolutionary framing of artificial narrow to general to superintelligence. I don’t believe humanlike “general intelligence” is, or should be, the primary metric by which we gauge machine intelligence. As a caveat: I don’t say this as a knock on the intentions of people who use this framework. For myself and many others, resources like the two-part Wait But Why series were once helpful orientations to the subject. But we’ve reached a point of saturation, both of the technologies themselves in our lives and of the expanded discourse (the takes, oh the takes), where the framing is no longer helpful, and I think in many ways quite counterproductive.
Perhaps I’ll be proven wrong on this, but I see the reality of convergent machine intelligence(s) as far stranger and more elusive than those bounds allow. So-called “narrow intelligences” can already do things far beyond the capability of any human being, and even GPT-4, a multimodal model so advanced it purportedly shows “sparks” of AGI, still hallucinates all kinds of plausible-sounding bullshit. With machine intelligence, we’re talking about qualitatively different forms of intelligence—and human beings propose new architectures every day, any of which might reroute the course of machine learning. The network architecture behind all of OpenAI’s GPTs, the transformer, for example, was only formalized in 2017.
In complex systems—digital, biological, or otherwise—emergence is the phenomenon in which properties or behaviors arise only through the interaction of a system’s parts, though none of the parts exhibits these characteristics on its own. Emergence is already occurring through generative models, and its scope will only increase as we deploy more of them. Our language and metaphors should reflect that through flexible, deductive approaches.
Most of all, I have problems with how the notion of the Singularity, promoted foremost by the TESCREAL folks, has infiltrated machine intelligence discourse. The “Book of Revelation: Machine Edition” tone among adherents promotes narrow (and often reflexively colonial) understandings of what machine intelligence is and can be, and worse, leaves newcomers to the topic feeling overwhelmed and powerless to participate in shaping the development of these systems.
There are many urgent problems associated with the rise of publicly available “generative AI” tools, but at present they have to do with the ways these tools will be used by humans to accelerate the existing problems and inequities of late capitalism, consolidating wealth and power in the hands of the few at the expense of the many. Hyperbolic scenario-casting doesn’t help us navigate the murky waters toward solutions, which at the moment involve social organizing and advocating for sensible regulation.