How Future Histories Of ‘Other Intelligences’ Clarify Today’s AI
In The Institute for Other Intelligences, Mashinka Firunts Hakopian combines speculative fiction and media theory to propose equitable futures for intelligent life.
An earlier version of this article appeared in Forbes.
Generative artificial intelligence, a loose category describing a variety of machine learning techniques that produce text, images, video, audio, and other content, has provoked heated debate about the relationship between humans and machines.
Conversations about AI safety, ethics, biased data, and algorithmic oppression are not new; over the past decade, leading voices like Emily M. Bender, Joy Buolamwini, Rumman Chowdhury, Virginia Eubanks, Timnit Gebru, Margaret Mitchell, and Cathy O’Neil, among many others (some listed below), have brought these issues to light through publications, presentations, and other forms of public outreach. These concerns, which foreground how machine learning might be used to reinforce systemic inequality and reproduce asymmetrical power structures, remain critically important as novel AI technologies become publicly accessible.
What feels new about the past year is the ascent of Singularitarianism, which is concerned with the existential risk that generative AI, particularly large language models, poses to humanity’s survival. In this framing, AI evolves from narrow to general to super intelligence, at which point it explodes past human capability. Some proponents believe current-generation models have already reached so-called artificial general intelligence (AGI) — and that with an exponential growth curve, the dangers of artificial super intelligence could be just over the horizon.
From open letters calling for a pause in all large language model development to “godfathers” of AI changing course in their careers to warn the public about AI risks, the tenor of these conversations often veers apocalyptic, which I’ve claimed leaves many people feeling powerless to influence a seemingly inevitable doomed outcome.
Well-intentioned as these efforts might be, what if they interfere with other ways we could imagine the future of machine intelligence — and of intelligence broadly?
In The Institute for Other Intelligences, a hybrid work that incorporates speculative fiction and media theory, author Mashinka Firunts Hakopian presents a radically different view: an imagined future in which AI agents gather for a yearly academic symposium on algorithmic equity. The story unfolds as a series of transcripts from one convening. Readers work inductively, experiencing a future in which the problems of today’s AI are distant memories, solved long ago through thoughtful engagement. Each chapter elaborates on the issues that were critical to address in order to bring about a just world in which algorithmic intelligences proliferate — and the writers, scholars, and artists whose work from the “past” provided the means by which to understand them.
Hakopian’s framing offers a long view of machine intelligence, in which it evolved toward equitable, prosocial futures, making The Institute a must-read for anyone hoping for a more nuanced understanding of the state of AI today. The book sidesteps the current hype-doom binary associated with AI discourse, instead highlighting the societal challenges that the introduction of AI tools can pose — and the ways we might address them today in order to bring about sovereign, aligned bots in the future.
The book is a work of speculative fiction, and these insights are presented narratively, with dashes of playfulness and intimacy. Storytelling is often a better vehicle for sensemaking on complex topics — including those that could credibly pose existential risk — than alarmist rants, and The Institute is testament to this logic. The public dialogue about AI has intensified even in the few months following its publication, so I sat down with the author for a wide-ranging conversation about the role of these technologies in the production of art, contexts for creative labor, and her approaches to storytelling within this complex subject matter.
Jesse Damiani: How should we think about the role of human creativity and creative labor in the context of persistent, ubiquitous, and constantly improving machine intelligence?
Mashinka Hakopian: Debates around generative adversarial networks and image generators have, by and large, been grounded in the extraction of artists’ labor through the use of their images in training data. Fewer of these debates attend to the labor of the data workers who are training models, and the labor conditions under which those models are being trained. This is also true for large language models. Much of the conversation about ChatGPT has focused on its implications for writers’ labor. While this is inarguably an urgent conversation, it has often occluded other conversations around, for example, the labor of data labelers in Kenya who were contracted to make ChatGPT less toxic and paid less than $2 an hour to do so.
To address the topic of human creativity: assumptions about imperiled creativity in the context of generative adversarial networks and large language models often invoke a model of creativity that we’d do well to discard. Consider Arthur Miller’s The Artist in the Machine. In this book, he attempts to answer, once and for all, the question of whether automated systems can be ascribed human creativity. To furnish that response, he codifies the characteristics of existing “geniuses” (the book’s list includes figures like Picasso, Georges Braque, Philip Glass…Peter Thiel), then assesses whether computational systems can produce or approximate those characteristics.
That methodology is emblematic of broader conversations around creativity and AI, which are often asking, essentially: “Can AI mimic the most hyper-visible figures in a given canon?” Or, “Can AI produce output that has already been ascribed cultural value?” Creativity has been defined largely in relation to the creative output of canonical figures — with canonicity sketched through a Western and Eurocentric lens. What continues to be left out of the [generative AI] conversation is: which forms of visuality and whose visions are being highlighted or reproduced or extracted or remixed in the outputs we're seeing now?
I wrote about this for a forthcoming essay in the journal AI & Society, focusing on the strange paradox of imputing hyper-novelty to AI tools when what they generate is often a reproduction of existing canons.
JD: What strategies, practices, or ideas do you imagine we will need as we confront the question of displaced human creative labor under late capitalism?
MH: We often ask, “How will artists and designers respond to the radical transformations that we're seeing?” What if, instead, we asked, “How will the economic and labor infrastructures in which artists and designers are embedded respond to these transformations?” The impetus is often to place the burden of responsibility on individual practitioners and cultural workers to “harness the emergent capabilities of these tools,” to “rise to meet the conditions of this inflection point,” and so forth. The burden of responsibility should instead be on the infrastructural layer of tech companies producing these technologies, the employers and clients who solicit labor in this ecosystem, and the regulatory bodies who should be tasked with preventing the most extractivist scenarios from materializing.
That question is one that we should pose to agents of power within economic and labor infrastructures rather than the practitioners within them.
JD: Are you optimistic that that will happen?
MH: I’m deeply wary of the dystopian death knell of algorithmic determinism. There are spaces of intervention to carve out; cultural workers acting collectively can meaningfully impact and transform art labor ecosystems. But it’s undeniable that we're seeing an intensification of extractivism. And, without those interventions, we are likely to see more of it: the radical under-compensation of workers and the devaluation of many forms of creative work.
JD: Economist John Maynard Keynes once predicted we could live a life of “leisure and abundance” with 15-hour work weeks, and the rise of generative tools has prompted a new bout of utopian dreaming that such a context would allow people to pursue their creative passions outside of doing so for their livelihood. Obviously we’re nowhere near that reality, but some might argue that we’ll never get there if people keep doing “bullshit jobs,” which include some categories of creative labor. The counterargument is that the current way these systems are being deployed will only serve accelerationism, which necessitates an approach that will harm people’s livelihoods without offering an alternative means of making a living. How are you thinking about all this?
MH: To be liberated from work that can be performed by automated systems, we would need to also have labor infrastructures calibrated for liberation — economic systems designed for equitable resource distribution, particularly for workers who would be disproportionately impacted by the transformations of labor.
JD: Which perspectives are not being included in these discussions, and what is risked in not including them?
MH: I’d like to answer this question in 15 different ways. Some of my recent research has focused on models that preceded Stable Diffusion, Midjourney, and DALL-E. Models like AICAN, which was developed at the Rutgers Art and AI Lab, or the work done by OBVIOUS, the French collective that produced the first algorithmically generated work to be sold by Christie’s in 2018. (Notably, that work was enabled by open-source code taken without attribution from its then-19-year-old creator Robbie Barrat.) Tracking the responses to those projects shows that determinations about whether AI can engage in art-making were being negotiated in relation to AI systems trained on Western art history. In other words, conflating art with the Western art historical canon. What continues to be left out of the conversation is which forms of visuality, whose visions, are being highlighted or reproduced or extracted or remixed in the output that we're seeing now.
JD: You use a novel framing in The Institute for Other Intelligences that combines media theory with speculative fiction, projecting us into the distant future where many of the present concerns about technology and power have been addressed. In that way it was alleviating as a reading experience, though the subject matter, presented as a futuristic academic symposium, is still quite rigorous. What was the creative process in developing these ideas and presenting them this way?
MH: I began developing this project when I was a senior researcher in AI ethics producing weekly reports in a tech context. That work involved following the discourse very closely at the granular level each week, and cherry-picking the most concerning and transformative developments in a given period. It began to feel vitally necessary to envision something else. In my research and pedagogy, I also focus heavily on planetary futurisms and speculative forms of making. I wanted to be very careful not to present a utopian vision of the future that absolves us of accountability in the present, or suggests that someone, somewhere, someday will fix it. While compiling these weekly reports, I was regularly experiencing what became a habitualized sense of disbelief. I wanted to conjure that disbelief from the perspective of future agents situated in a scenario so different from our own that it would be possible for them to look back at our present moment and think, how was this possible?
Part of what draws me to AI research is a longstanding interest in how knowledge production occurs more broadly. My earlier research looked at artist-activist groups and the radical pedagogy they invoked through the form of lecture-performance. So I was thinking about how to embody knowledge, how to denaturalize the forms through which knowledge is produced and reject the “view from nowhere” model of technology. The academic symposium is such a naturalized packaging of knowledge in a standardized unit of time and experience. I wanted to denaturalize not only the knowledge represented in AI systems, but also the knowledge-making represented in the standardized academic context.
JD: Could you elaborate a bit more on the “view from nowhere” stance you critique in the book, which is of particular relevance now as we're entering a paradigm of large deep learning models?
MH: Critiques of the “view from nowhere” in the AI context are inherited from feminist technoscience, from the writing of people like Donna Haraway, who advocates for situated knowledges. Dominant models of visuality and knowledge-making hinge on disembodied, Cartesian, floating orbs in the sky that know without sensing, without feeling, without being an embodied figure in the world. Popular imaginaries of AI inherit these models by envisioning AI as a neutral, objective, knowing agent whose knowledge is unimpeachable because it is automated. So many thinkers have presented crucial critiques of that model — Ruha Benjamin, Safiya Noble, Sasha Costanza-Chock, Lauren Klein and Catherine D’Ignazio, among many others. That model enables the magical thinking of automation bias, which proposes that the outcome or decision-making of any given automated system can be trusted because it’s automated, and therefore has the mark of absolute truth or neutrality.
The book performs this critique by writing from the perspective of artificial killjoys in the future — from bots who have been coded to prioritize embodied knowledge and ancestral knowledge.
JD: That was another thing I found myself thinking about, the agentic properties of bots. We often refer to bots derisively now, because they’re closely associated with automation and trolling on social media. Some of this gets back to the original etymology of the word “robot,” which speaks to servitude and forced labor. With the artificial killjoys, you're proposing a time in the future in which bots are sovereign beings capable of both understanding the ways they've been coded and imagining new ideas about themselves. There’s a lot of talk right now about the potential of autonomous agents through AI, but of course that’s all still speculative. Can you share a bit about your own thinking regarding bots and the creative choices to imagine them as autonomous?
MH: I love that you surface the bot as an entity that's historically undervalued and a term that’s often used in a pejorative sense. I was thinking about what it might mean to elevate the knowledge of a bot — not as an argument for the singularity, and most emphatically not as an argument for the inevitability of artificial general intelligence — but instead to point to exactly that dynamic: the tendency to consider the knowledge of a bot as inferior to the knowledge of a human, and to undermine an anthropocentric view of knowledge and intelligence. There's an addendum at the end of the book, “A Note on ‘The Human,’” that offers a brief gloss of why the human has emerged as the privileged category of being in Western epistemologies. The addendum traces the concept of the human to sites of colonial encounter, where a model of the human emerges in direct opposition to nature, racialized others, and technology — and emerges as the privileged term in that constellation. In valuing the knowledge of the bot, I was also pointing to the forms of undervalued knowledge that circulate without the attention that they deserve. I was thinking in particular about the knowledge of the feminist killjoy. The feminist killjoy is a figure sketched by Sara Ahmed: a figure who articulates conditions of injustice that otherwise dwell in silence, and who does that through embodied knowledge, through the feeling that something is amiss in a given social space. That feeling is not held by the other participants in a given social scenario, and therefore the knowledge produced by it is devalued. I was trying to imagine a space for centering and foregrounding otherwise undervalued knowledges.
JD: Some of humanity’s largest-scale contemporary problems have to do with externalities of evolutionarily encoded tendencies, such as competition and scarcity mindsets. Do you imagine bots of this sort being able to create the kind of knowledge and opportunity spaces for our species to begin to amend some of the negative externalities associated with these traits?
MH: One line of speculative inquiry that I followed while writing this book was to imagine what it might look like to encounter a scenario where non-human agents are working in coalition and collaboration with human agents, and operating from a position and relation of care. Another addendum at the end of the book is a “Note on the Neural Architecture of Memory,” where the “other-intelligent” faculty and students of the institute indicate that they continue their work in order to perform offsite data storage for intergenerational memories as an act of care.
JD: You invert the idea of footnotes and references in The Institute by placing them early on in the text and taking up multiple pages. It’s a way of formally illuminating some of the ideas you're trying to embody about training data, and the ways in which we encounter the outputs of datasets in the world. Are there particular questions, takeaways, or insights that you would imagine an ideal reader would have when they finish reading the book?
MH: It was important to include that training data disclosure for two reasons. First, because it feeds into the broader conceit of the text being the product of nonhuman agents who have been trained on a certain dataset. And if that were the case, then a transparent presentation of their knowledge would require also presenting their training data. Secondly, I wanted to provide a constellation of resources for readers who may be coming to this material for the first time. Becca Lofchie’s design for the book contributes so much to generating the book’s sensorium and speculative scenario; I’d be remiss not to recognize the enormous role she played in giving the project visual form. This is also the case for the speculative diagrams produced by Fernando Diaz, who was a core interlocutor for the project.
With respect to key takeaways: the possibility of a future coded by artificial killjoys.
THE INSTITUTE FOR OTHER INTELLIGENCES
By Mashinka Firunts Hakopian
136 pp. X Artists' Books. $18.