What Ted Chiang’s New Yorker Essay Gets Wrong About Art and AI—and Why It Matters
3 responses to the legendary writer's recent claim that AI will never make art.
Ted Chiang is one of the most important writers of our time, not only for his fiction but for his essays, most recently on the subject of machine learning and artificial intelligence. “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?,” both in The New Yorker, are two of the most lucid public-facing essays about the intersection of AI, creativity, and culture. I can’t recommend them enough.
Which is why Chiang’s latest essay, “Why A.I. Isn’t Going to Make Art,” feels so bewildering. As with his previous essays, it deftly identifies how late capitalism perversely incentivizes AI’s potential shortcomings, amplifies harmful progress narratives, and stands to damage art and art’s role in society—but in the process it manages to misrepresent both AI and art, seeking to freeze-frame and mechanize what are in fact complex, liquid forms. I believe Chiang and I agree on the broad strokes, but disagree on what it means, what it asks of us, and the ideal strategy—and what follows is offered in the spirit of dialogos.
[NOTE: this piece doesn’t recap Chiang’s essay, nor does it address every single point he makes. His piece is well worth reading; I encourage you to check it out before coming back here, though it’s not necessary in order to understand my arguments.]
Art is Contextual
Chiang introduces his argument by defining art—which he admits is a notoriously tricky task—as “something that results from making a lot of choices.” Certainly great art involves a lot of choice, but this definition is at best a subpoint for a more comprehensive, explanatory definition. What field does not involve a lot of choices at the level of practitioners and experts? Science certainly involves a lot of choices, as do math, basketball, and cooking. Even design, often held up as the yin to art’s yang, involves a lot of choices.
An example of a definition for art I would offer is: creative expression that attempts to communicate something true, honest, and/or beautiful through specific materials, in such a way that it could not be communicated otherwise. To be clear, this can only account for one idealized version of art based in my own subjectivity, and, as will be discussed below, the human realities of what we collectively call art frustrate any attempt at a single definition.
We might overlook Chiang’s misrepresentation if it weren’t the foundation upon which he develops his argument. Outside of definitions and semantics, art is not a fixed thing. It evolves alongside humanity’s symbol- and tool-making capabilities.
Painting on canvas, for example, only came into existence in the 14th century, and the stretched canvas (the canvas as we understand it today) was only invented in the late 19th century. We think of poetry today as a written medium, but some scholars believe it was an oral medium first, predating literacy. What was expressed as poetry then bears little resemblance to much of what is produced now, and even the modes of oral poetry that persist today are vastly different in style and function from the oral poems of thousands of years ago.
Thus, saying AI will never create art implicitly impoverishes possible futures of art: modalities of artistic expression that do not exist today. Even granting the assumption that generative AI will remain static and only ever produce outputs that are blander than human-produced art, there could be future milieus in which producing the blandest, most generic output is regarded as the highest expression of art. In that case we might not say that the AI model is an artist, but if, as today, art is deemed as such by a loose consensus of key stakeholders (artists, institutions, distributors, financiers, enthusiasts, public audiences, et al.), it’s art.
Artists Have Already Made Art with AI (and Have Been Doing so for Decades)
Though the stated thesis of Chiang’s argument addresses the question of whether a model can autonomously produce art, his evidence relies on examples of humans using generative AI in the production of art, a critical difference. In this, he exposes himself to easy rebuttals; of course artists can make art with AI, and indeed have been doing so since the 1960s. Rather than burden this essay with the abundance of potent examples, I’ll instead point to the exhibitions I’ve curated featuring artists doing exactly this.
A wonderful dialogue with artist and science communicator , which examines AI in connection with culture, power, mathematics, and dimensions, as well as the ways that coral reefs exhibit the curved structure of spacetime.
There’s an inevitable counterargument here that what constitutes capital-A art is rooted in subjectivity, and perhaps the artworks selected above won’t read as art to certain audiences. Capital-A art as we understand it today lives within the lineage of Modernism, epitomized by Duchamp’s readymades, ordinary objects that Duchamp “elevated” as art by the simple function of his artistic choice. However true this reality may have been before, readymades rendered it mainstream, diffusing this relationship to art and materials so thoroughly into the establishment that it’s now hard to imagine how one would even produce, discuss, and critique art absent this commitment to concept. If materials as seemingly far-fetched as a bike wheel, urinal, thread, and a wheat field can be used to create some of the most iconic works in modern art history, it almost feels silly to state that AI could as well—even including the most boring, generic large model, to say nothing of the handcrafted, bespoke models that artist-technologists might create.
Chiang’s argument that generative AI incentivizes lazy participation is legitimate, and here I want to echo his claim that AI will probably have a large-scale blandifying effect on creative communication. This is especially true as it pertains to the periods of early learning in one’s artistic practice, which are the hardest and often most thankless because aspiring artists’ tastes so outpace their capability. It is here that critical learning occurs, and it’s not hard to imagine the easy allure of generative AI undercutting this necessary, formative learning.
In an earlier essay, Chiang nails this:
“Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.”
But arguing that AI will interfere with respective individuals’ artistic development is a far cry from claiming AI will never (be used to) create art. Thus, my claim is not that the diffusion of large AI models into mass culture will be a net good for art and society, but rather that future societies are fundamentally unknowable, and we betray a limiting bias when we attempt to project today’s value systems (artistic or otherwise) onto them.
Large Commercial Models Are Only One Kind of AI—and Only a Few Years Old
Generative AI as we understand it today might be a poor tool for producing art in most cases, and it may have knock-on deleterious effects on artists’ development, but claiming it will never (be used to) create art discounts what these systems might evolve into, as well as the prospect that AI models themselves might be art.
Chiang seems to want to make claims about AI as a monolith while limiting the scope of his argument to contemporary large commercial models. Given the outsized presence of AI in public discourse, it’s easy to forget that generative AI really only became a public discussion in 2022 through the seemingly sudden capabilities of Midjourney, DALL-E, and ultimately ChatGPT. There had been buzzy moments prior, such as the spate of “an AI wrote this article wow” pieces produced using GPT-3 in 2020, but 2022 was when “generative AI” became the category we now understand it to be.
As such, the claim that there exists no artistic potential feels shortsighted and epistemically brash, betraying a disposition about how it feels for AI to creep into everything rather than a more measured assessment of what it might mean for the futures of art, creativity, and culture. The transformer deep learning model that underpins ChatGPT and many other LLMs was originally proposed for machine translation, and its broader capabilities came as a major surprise to its inventors. Even taking the longest view back to its initial proposal in 2017, we’re only in the very early phases of what it means that these particular deep learning models are the ones that exploded into the public. There are countless other existing and forthcoming types of models based on different frameworks and proposals, such as the state space model (SSM), which could be years away from being well understood and/or able to radically alter AI’s capabilities. Thus, if the choice is between predicting that AI will never create art and predicting that some future version of AI will, the odds favor the latter.
And the culture of AI is vaster than the large commercial models alone. The hype of the past couple of years has also set in motion breakneck startup investment, new social and entrepreneurial dynamics, and academic research projects, and, critically, it has galvanized the open source community. These developments take time to gain recognition.
In some cases it won’t be the individual outputs that will be regarded as the art, but the models themselves. This notion extends arguments that digital artists have been persuasively making regarding generative art (not to be confused with generative AI), in which they claim that it is the code that is the true art, and the outputs (i.e., individual images) are how that art is expressed.
What happens when AI capabilities (however successful they are or are not) intersect with neurotechnology, especially brain-computer interfaces?
Dipping into more fantastical speculation: imagine if the initiatives to use AI to translate animal speech succeed. What new hybrid forms of art and communication might emerge in such a moment? More importantly, what are the new dangers and possible sources of harm that this development will bring—and who do we imagine will be the ones to identify and communicate those to the public?
Why Does This Matter?
Perhaps the AI of our futures will lean more toward perfunctory tasks than artistic ones, but it’s important to acknowledge the role artists play as translators of their respective paradigms and zeitgeists. This artistic translation is not merely for the general betterment of society; it can identify key points of intervention in problematic structures that the public may not yet perceive or understand. As Elan Mastai said, “When you invent the car, you also invent the car accident. When you invent the plane, you also invent the plane crash.”
What are the proverbial accidents and crashes latent in AI and the surrounding culture that artists might identify, communicate, and possibly intervene in?
Modern digital life involves participating in technologies that have been engineered under the incentives of late capitalism (or technofeudalism, depending on your preferences). They involve high degrees of surveillance, targeting, and psychological manipulation, and for many, they form the bedrock of daily interactions. Art is only one possible means of intervening in these systems, but it’s an important one. Artists who intimately understand the material of AI will be more capable of levying critique and obstructing the project of extraction.
This is something I discuss in depth with & Caroline Sinders in a recent episode of Urgent Futures, and I can’t recommend their wisdom enough.

Thus, this isn’t an esoteric industry debate. The narratives underpinning the development of AI are potent, driving industry investment, the future of labor, and energy realities. It remains popular, especially among artists, to reject AI outright, but this entails an unfortunate resistance to developing literacy and awareness about what is going on among the very people who are possibly the most poised to effectively intervene. I fear that pieces like “Why A.I. Isn’t Going to Make Art” confirm that stance, as if to say, “Nothing to see here, folks!” Even though that disposition may prove appropriate in many cases, especially regarding the production of subpar, concept-less images that parade as art, this attitude downplays the role AI will continue to have in society, and the need for liberatory positions and prosocial interventions in the face of its ongoing, breakneck development.
It feels strange writing a piece like this, as Ted Chiang is something of a literary hero of mine. His stories and essays are superlative examples of the role artists can play in shaping discourse about emerging technologies. I offer this essay in the tradition of his earlier essays, as I believe these are important considerations not just within art and AI, but for society, power, and policy. If he happens to read this and wants to discuss it on Urgent Futures, I’d welcome the opportunity. I continue to have immense respect for his overall clarity of vision about how AI will be used to flatten discourse and serve the interests of the wealthy at the expense of the many.
Yes, I see opportunities for artistic invention with AI, but I see many more opportunities for AI to be used to exacerbate the worst of modern society. In this I’m in total agreement with Chiang, but in this case we disagree on how this should be expressed. Given how vast and nuanced the possibility space of AI is, it feels critical to advocate for artistic AI literacy—among both artists and the broader public—and it seems to me that trying to prove how current-generation AI is creatively deficient sabotages that goal.