When We Walk Away From the Lying Machines
or: a pronouncement of doom and hope, tied into talking trash on AI
AI might destroy civilization.
AI might destroy civilization, or at least the internet, but I don’t want to get your hopes up.
There’s something about AI that brings out the grand sweeping statements, the proclamations of new divinity or doom. There are people, people in power, who believe that the copy-paste machines they’ve built will soon ascend to godhood, and those people in power take reckless action presuming that to be true. Then there are people like me, writing essays that open with sentences like “AI might destroy civilization.”
I’m probably wrong. Of all the times civilization has shuddered, of all the times civilization has shaken, at least this past millennium or so, it’s held itself together.
But bear with me while I use hyperbole to make a point.
Modern society is built of interconnection, of global trade, of networking standards, of communication, of our ability to verify certain information. It’s built on a generalized sense of trust. That feels at risk.
Adults today, myself included, suffer from credulity. We’re used to the idea that if we pick up the phone and hear our mother’s voice, we are talking to our mother. We’re used to believing the news when we see clear video of events. Sure, darkroom trickery and photoshop and CGI have existed for quite some time, but the overwhelming majority of the photos we’ve seen are actual representations of what the camera saw. If we see a single photo, we might not trust it. If we see photos and videos from multiple angles, and video testimony from multiple witnesses, we’ll believe it.
We suffer from credulity, and for us, reality is breaking down right now, because we’re so used to believing the evidence of our own eyes. All of us (or maybe it’s just me) have reposted funny or strange videos on social media, only to learn later (to use a specific example) that kangaroos cannot, in fact, jump backwards.
Because we tend to believe what we see, we’re easily siloed off by social media algorithms that feed us distinct visions of reality. Just this past week, I saw both an AI video of someone outrunning ICE while wearing a ridiculous costume and an AI photo of grateful residents of a city greeting ICE with free coffee and welcome signs. Whatever you already believe, there’s AI slop for you to lap up.
This siloing of reality was already a problem, and it’s one that will only get worse as false images and videos become easier to produce and harder to detect. “Media literacy” is getting harder to train for. AI is improving faster than the public is learning to detect it. This siloing of reality is the problem I’ve been aware of for a while, the one I hear people talking about. Right now, the problem is that we tend to believe what we see.
The next generations will, I believe, suffer from an essentially inverse problem.
A child born today will never experience a world in which there is any reason to believe that realistic images, video, or audio represent reality, or that a person wrote the words that they read. Instead of presuming any photo-realistic image (or credible-sounding audio) is real unless proven otherwise, they might simply assume that every image and text was generated by the monstrous lying machines that sit in warehouses just outside our cities and suck up all our water and power.
A video used for court evidence will seem about as believable as if the prosecutor had shown up with a slideshow of oil paintings. There will be no reason to believe a comment on a forum was written by a human, no reason to suspect that the influencers on YouTube are actual people with actual opinions or actual information. Another player in an online game will be no more likely to be a person than not.
The only verifiable interactions will be those done with people we have met in person, or who have been vetted directly by people we have met in person.
If this culture of disbelief becomes the norm, the ramifications of it are astounding and wide-reaching. In essence, the only verifiable experiences will be face-to-face. Trust will only be found where it is earned, and in a culture ruled by lying machines, honesty and humanity might be valued above all else.
In activist circles, we call this a web of trust. Webs of trust are necessary when you’re planning direct actions that put you in harm’s way, but they haven’t traditionally been necessary when you’re trying to look up how to wire a ceiling light or when you want to find out how the war against authoritarianism in Myanmar (or Minnesota) is going.
The internet will be fundamentally transformed, no longer a place where you can learn things, but only a sort of strange collective hallucination. We will be drivers in a fog of disinformation, our paths lit only by the dim headlights of trust.
There are people working on technology like digital signatures, technologies that allow one computer to trust another. Video cameras might be equipped with signatures that prevent (or complicate) spoofing. But I’m skeptical that this technology will be consistently reliable, let alone user-friendly enough to spread widely and turn the tide against disinformation.
I’m wary of anyone making grand pronouncements of doom, and you should be wary of me. But the best-case scenario I can imagine puts us roughly back in the 18th century, before the advent of photography. Yet while newspapers and journalists have always been biased (and were more dramatically so before the 20th century), it used to be that you could at least trust that this or that written piece of news was written by a person, that it conveyed, if nothing else, the opinion of the writer. Now, unless you trust the byline, there is no reason to trust it at all.
In the world of fiction publishing, we’re already several years into this collapse. No one can be bothered to read stories that no one has bothered to write, and fiction magazines tend to have fairly strict no-AI policies. Those policies are getting harder and harder to enforce. The best advantage I have as a writer, submitting to those magazines, is that people know who I am. I have a decent reputation, because no one has accused me of doctoring my writing with AI (though I sure do love emdashes, I must admit). I can’t imagine how much harder it is to break into the field today, and it’s only going to get worse.
Trust is already a social currency. This is the reason I do not share any fundraiser that was not personally vetted by either me or someone I know well and trust. The value of trust as a social currency, though, will only go up. People who trust information they’ve read on the internet, or any photos or videos they’ve seen, will seem hopelessly naive.
There’s a sort of beauty to the less credulous culture we might build, though. Science fiction has been promising us a fully-online dystopia for decades now, one in which people are fed slop and lap it up and let the physical world fall to ruin. I’m a professional optimist, and I don’t think that’s what will happen. Sure, my own generation, and even the zoomers, might not adapt, but younger people likely will.
The only logical response to an internet full of utter disinformation is to return to face-to-face interactions. Live theater and music will be all the more important. Journalists might give more talks in person about what they’ve seen. If we’re desperate for influencers and hot takes, we might even see the return of one of the oldest professions in the world—soapboxers, who stand on street corners to rant as entertainingly as possible on this or that subject. (Seriously, I was shocked by how universal this form of entertainment was until the advent of radio. A good soapboxer was a theorist and a comedian at the same time, able to captivate crowds of dozens or hundreds.) Maybe I’ll move from podcasting to live storytelling. Maybe I’ll be happier, albeit likely poorer.
We might talk to one another more. We might build trust. We might build community. We might learn to prioritize the local. We might make things with our hands and speak with our mouths. If I’m going to make utopian pronouncements, I’ll say maybe we’ll learn to govern ourselves with local councils, then federate those local councils and build an egalitarian society from the bottom up. Maybe we’ll extend this culture of distrust so far that we stop trusting capitalists and stop trusting the people promising us authoritarian alternatives to capitalism.
Maybe this grand reset of trust will lead to a grand rebuilding. Maybe we’ll build a better world, largely (but probably not completely) offline.
And I’ll have to find a new job.
But that’s alright with me.


I gotta say--today's my birthday (and an ends-in-a-zero stock-taking kind of birthday) and my imagination was very ready for a little slap on the side of the head to clear its reception, a la an old-school television. Thanks for the jolt, the hope, and my new favorite phrase for AI, "copy-paste machines".
Along the lines of "A child born today will never experience a world in which there is any reason to believe that realistic images, video, or audio represent reality..."
I have an 8 year old. Born before AI, but coming of age in the world of the lying machines. This weekend, we showed him Star Wars: A New Hope. His first comment, re R2D2 and C3P0 was, "Why is the animation so janky?" Because it did not occur to him that what he was seeing was live action filmed images of guys in weird little outfits. The lack of CGI smoothness seemed jarring to him. So I think that to an extent, all of this is already happening. Animation and CGI are so photorealistic now that a kid born in 2017 assumes any fantastical image must be permeated with artifice. Images I saw as a kid in the 80s and 90s and thought were magical (leading me to eventually work in film and TV production) seem to my 21st century child to be just another variation on artifice.
Like you, I'm both intrigued and deeply concerned about what this means for society. I don't know if AI is going to literally "destroy" society, but mass media is fucked. And I'm gonna be out of a job.