Reality Studies

Is OpenAI Hellbent on Destruction? On the Privacy, Security, & Sociopolitical Nightmares of Atlas Browser & Sora 2

Some thoughts on why the latest developments from OpenAI fill me with dread and the responses we should consider.

Jesse Damiani
Oct 30, 2025

It’s always worth checking yourself when you feel a sense of doom. As Panic! at the Disco reminds us, “It’s much better to face these kinds of things with a sense of poise and rationality.” Are your concerns founded? Are they priorities (because, boy-o, we don’t lack for things to feel concern about these days)? So I’ve been self-evaluating for a bit as my thoughts coalesced.

It started a few weeks ago, as my timeline was increasingly dotted with videos of folks deepfaking OpenAI chief Sam Altman into various scenarios, running the gamut from banal to silly to alarming. Then I started seeing friends and colleagues deepfaking their likenesses into popular IP. A whole host of other Sora 2 trends and viral videos sprang up. The latest is cats with guns.

This is the scenario that I—learning from security researchers much smarter than me—have been concerned about for years, dating back to my reporting on deepfakes and mis-/disinformation in the 2010s and early 2020s. Video generators have been getting exponentially more capable over the past few years, so I knew this moment was going to come at some point. Google’s release of Veo 2 was the first time it felt like it was happening, and Sora 2 has sealed it for me.

I don’t mean to condemn anyone who used Sora 2 this way, or who is curious to. It’s a perfect hack of evolutionary psychology—of course we’d want to try out a tool like this. Moreover, I continue to believe that even the most critical AI scholars have a responsibility to understand how tools work (i.e., “know your enemy”), and some folks will learn best through direct experimentation. I discuss this in a bit more depth on a recent episode of Urgent Futures with media scholar Danny Snelson (for that and so many other reasons it’s worth a listen/viewing!):

Danny Snelson: The Little Database vs. Large AI Models | #52

Still, it has filled me with dread. And then a few days ago it got worse. OpenAI launched Atlas, its “agentic browser,” a Chromium-based browser with ChatGPT built in natively to do tasks for you. An example of what this looks like in practice: you find a recipe you like and then ask ChatGPT in the sidebar to assemble your Instacart order for you. Now apply this to every facet of your web experience and you have a sense of the allure of such a tool.

In a case of “deja vu all over again,” I watched folks in my life barrel headfirst into what feels like a genuine nightmare scenario, reporting on their experiences with Atlas with breathless glee. Many began grandly proclaiming that this was The Future™.

Let’s leave aside all the hand-wringing about the hypothetical rise of “artificial general intelligence,” or AGI, at some point in the future; the developments listed above clock in as two critical risks AI poses to us right now.

Why Atlas and Sora 2 Scare Me

So let’s get into it. Why is this such a big deal? For different but related reasons, I’m concerned about the development of both of these products. How should we think through all of this and navigate it? And more fundamentally, why is OpenAI shipping products that offer questionable benefits but pose clear risks?

Though I think the scariest possibilities lie with Atlas, Sora 2’s impact on video is more visceral, so we’ll start there.

Sora 2 and the Deluge

You might guess that my biggest concern about realistic synthetic media is the usual one around its enablement of disinformation in which public figures are portrayed saying/doing things they didn’t actually do. This is indeed an evolving dimension of daily life we’ll now contend with—and that’s not ideal—but on balance these fears are overstated relative to what I think the thornier problems are.

Yes, in the months and years to come you’re probably going to end up with egg on your face for rage-reposting a public figure saying or doing something they didn’t, but in the end these fakes are relatively easy to debunk. The damage is basically the whack-a-mole we all now have to play in coming to the truth. That’s annoying, yes—deeply so—but relatively low on the existential risk register.

One of my biggest concerns is actually the inverse of this scenario: how bad actors will weaponize the existence of generative video models to dismiss human-recorded footage. For example, that Sora is emerging while Republicans continue to use every tool in their toolkit to block the release of the Epstein Files is literally problematic. Say Congress finally does succeed in forcing the files into the public eye, and they include more imagery such as the Epstein birthday book—now anyone implicated will be able to claim that these are faked assets as part of a “witch hunt.” It doesn’t matter that it won’t be true, or that many will immediately know it’s not true—it will create a storm of diverging opinions and arguments that distract from justice being served (especially in a context in which the cards are stacked against those trying to hold the perpetrators to account).

If you think the folks in your life need to understand everything here, please send this post to them.

Which brings us to another big concern—and I recognize it might sound banal at first—the volume of “bullshit” (in the Frankfurtian sense) that will soon exist in our digital lives. Both our individual attention spans and our ability to collectively organize around any one issue have already been fractured and atomized by algorithmic social media, which incentivizes engagement over civility. The rise of fast, cheap, plausible synthetic video is an accelerant to these tendencies, with some unique attributes that further exacerbate the situation. Everyone having the ability to produce plausible videos of anything within seconds means we’re going to spend ever more of our time swimming through the slop ocean, and less of it being exposed to those trying to use their channels for positive, sensible, or just ends.

It goes further. Until now, there has been a fragile consensus that if a given video looked “real” enough, it probably was. There’s a sense in which this has always been a lie—we’ve been proverbially “photoshopping” photography since the birth of the medium (to the best of our knowledge, that honor goes to Hippolyte Bayard in 1839; see the video below)—but that collective agreement meant that we didn’t feel the need to treat each and every video as detective work, sleuthing out whether or not it was captured in the physical world. Sure, we did this in isolated scenarios (e.g., the moon landing video, JFK assassination, and more recently the Charlie Kirk assassination videos), but we weren’t forced to implement it as a broad practice.

No longer. Now even the most mundane video will necessarily pass through a skeptical filter in which we have to interrogate its veracity and its creator—what Stephen Marche calls the “Big Blur.” I encountered this with a colleague the other day, who explained to me that they’d seen a video of Trump promising student loan debt forgiveness. “Was that real or AI?” they asked. I said that I didn’t know—I hadn’t seen the video—but it didn’t track with any of Trump’s positions, so it seemed unlikely to be real.

This is where the forensic aspect comes into stark relief; we’re so keyed up to think about politicians being deepfaked to say damning things that “get them in trouble,” but what about throwaway “good” things that distract us from the real actions public figures are taking? It’s this register I worry about—the ways that existing filter bubbles, biases, hopes, or otherwise can be confirmed and enclosed, conditioning users to retreat into cozy imaginary realities where, as Kurt Vonnegut famously wrote, “everything was beautiful and nothing hurt.”

I think back to the “Pope in a puffer jacket” incident in 2023, in which many were duped into believing that the Pope had actually been spotted wearing a designer coat because of some deftly crafted Midjourney images. What made this such a perfect example is that a) there were minimal actual stakes, and b) it played on existing biases about hypocrisies in the Catholic Church. Thanks to Sora 2, this kind of blurry non-event is going to become much more common, the net effect of which is that we’ll waste our time and feel dumb when we inevitably get duped.

But there are genuinely harmful dimensions to this, too. We’ve already seen how deepfakes are being used in ransom scams and account-takeover attacks. With Sora 2 and tools like it, all a bad actor needs is a few seconds of somebody’s likeness, and a whole host of options opens up to them: from scams that get family members to pay money because they believe a loved one is in trouble, to gaining entry into a user account by using deepfaked content to “trick” two- or multi-factor authentication services.

Sora 2 isn’t the only player here, but what concerns me is the simple fact that Sora 2 is the best publicly available option for these kinds of efforts. At a minimum, I’d encourage you to establish passwords with family and friends now, so that if you or they are the victims of these scams (as they inevitably increase), you have a built-in mechanism by which to determine whether a given plea is real (and remember, statistically speaking, it’s not!).

If you still plan to use Sora 2, I’d ask you to consider a basic ethical economy of use. Does your individual usage contribute to “good” in the world? Do the benefits of your synthetic video outweigh the fact that your use will encourage others to make their own? Are you clearly framing how and why you use these tools to embody your ideals of ethical use?

Atlas and the Dangers of ‘Agentic Browsers’

And then we have Atlas, OpenAI’s new ‘agentic browser,’ which, as I mentioned, is even more concerning to me than Sora 2.

To understand how these work, just imagine a helpful AI agent embedded in your browser to undertake tasks on your behalf. With Atlas, ChatGPT is built in, so you can ask it to handle tasks like manual data entry (e.g., assembling a spreadsheet using data from the article you’re reading), research (e.g., finding ideal sources of information on a given topic), and summarizing articles.
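For a rough sense of the pattern at work (a hypothetical sketch, not OpenAI’s actual implementation; every name in it is illustrative), the core loop looks something like this: the browser extracts the current page’s content, bundles it together with your instruction into a single prompt, and treats the model’s reply as an action to carry out in your logged-in session.

```python
# Hypothetical sketch of the agentic-browser loop described above.
# Nothing here is OpenAI's actual Atlas code; names and behavior are illustrative.

from dataclasses import dataclass


@dataclass
class PageContext:
    url: str
    text: str  # whatever the browser extracted from the page you're viewing


def call_model(prompt: str) -> str:
    """Stand-in for a language-model call; returns a proposed browser action."""
    # A real agentic browser would send the prompt to an LLM here.
    return 'add_to_cart(item="flour", qty=1)'


def run_agent(user_request: str, page: PageContext) -> str:
    # The structural point: your instruction and the page's own text are
    # combined into one prompt, and the model's reply is treated as an action
    # to perform inside your logged-in browser session.
    prompt = (
        f"User request: {user_request}\n"
        f"Current page ({page.url}):\n{page.text}\n"
        "Reply with the next browser action to take."
    )
    return call_model(prompt)


if __name__ == "__main__":
    recipe_page = PageContext(
        url="https://example.com/focaccia-recipe",
        text="Focaccia: 500g flour, 10g salt, 7g yeast, olive oil...",
    )
    print(run_agent("Assemble my Instacart order for this recipe", recipe_page))
```

The convenience comes from exactly that bundling: your instructions, the page’s own text, and your signed-in session all meet in one place, which is worth keeping in mind as we turn to the risks.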

It might sound nice, but under the hood, this is a massive security risk for you.

This post is for paid subscribers
