AI, on the other hand, is the most embarrassing thing we’ve ever invented in mankind’s time on Earth. Oh, so you can’t do the work? Is that what you’re telling me? You can’t figure it out. This seems to be the justification of AI – I couldn’t do it. This is something to be embarrassed about. The ad campaign for ChatGPT should be the opposite of Nike. You just can’t do it.
-Jerry Seinfeld, Duke Commencement Ceremony 2024
Like any stand-up set, Jerry Seinfeld’s commencement address at Duke had ups, downs, and people who walked out. Amid headlines grading the speech as “good” or “bad,” it was easy to miss the moment Seinfeld attacked the AI industry for offering a new level of power that consumers simply can’t resist.
This new revolution in what Seinfeld believes to be laziness and weakness is spreading fast, especially on college campuses. Somewhere around 200 million people use OpenAI’s ChatGPT every month and most college students are frequent users:
96% use ChatGPT for schoolwork
69% use the tool for help with writing assignments
29% have used ChatGPT to write entire essays
86% say their ChatGPT use has gone undetected
Seinfeld thinks AI is “something to be embarrassed about.”
[We are] so obsessed with getting to the answer, completing the project, producing a result which are all valid things, but not where the richness of the human experience lies. The only two things you ever need to pay attention to in life are work and love. Things that are self-justified in the experience and who cares about the result.
Stop rushing to what you perceive as some valuable endpoint. Learn to enjoy the expenditure of energy that may or may not be on the correct path.
OpenAI’s big bet - and that of the estimated 70,000 other AI companies in the world - is that people don’t want to be bothered with that “expenditure of energy” Seinfeld regards with such high esteem. Because what’s the point of the process if the product is a failure? What’s the point of learning something that can’t be shared? In a digital world, speed to market is what wins money, fame, and power, and, increasingly, that speed requires quantity more than quality just to stay afloat in a stream whose exponential currents carry everything downriver to oblivion. In an ethereal economy where the consumption of experience is the point, creativity is friction. OpenAI CTO Mira Murati explained it in similar terms when asked about the impact of AI on creative jobs:
Some creative jobs will go away, but maybe they shouldn’t have been there in the first place.
The AI industry, like so many tech-slash-industrial revolutions before, is all about eliminating inefficiencies. But what if efficiency is not a zero-sum game? What if inefficiency is the soul of creativity that makes the best experience for consumption, despite the machine’s best efforts? Seinfeld believes that inefficiency - “the work” not the hustle - is where humanity is happiest, because there is nothing more human than wandering, idling, and daydreaming. This circuitous process is where we find the innovation and exploration that fulfill us, not the algorithm. With too many shortcuts, the journey is no longer the destination: the destination is the destination.
Legendary science fiction author Kurt Vonnegut, who wrote on a typewriter as long as he lived, might put the process of creativity against the efficiency of production another way:
[When Vonnegut tells his wife he's going out to buy an envelope] Oh, she says, well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know. The moral of the story is, is we're here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.
So does Jerry Seinfeld, like Kurt Vonnegut before him, hate technology?
Given his often tech-forward approach to streaming, social media, and smartphones, that can’t be true. But what’s clear from this commencement address is that Jerry Seinfeld sees AI differently because he suspects there is a thin line between convenience and humanity. And AI is crossing it.
Enter the Doomscape
When OpenAI unleashed ChatGPT on the unsuspecting masses in 2022, a survey found that nearly half of AI researchers believed there was about a 10% chance their work would lead to human extinction. While it’s admittedly funny that the people building the doomsday machine think there’s a nonzero chance the machine will end humanity but keep tinkering away, the average American in 2024 is even more worried: 39% of Americans are concerned that AI is going to end humanity.
For the STEMers in the AI industry itself, there are two sides to the debate: techno-optimists and AI skeptics. As The New Yorker reported in March:
A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves “effective accelerationists,” or e/accs (pronounced “e-acks”), and they believe A.I. will usher in a utopian future—interstellar travel, the end of disease—as long as the worriers get out of the way. On social media, they troll doomsayers as “decels,” “psyops,” “basically terrorists,” or, worst of all, “regulation-loving bureaucrats.” “We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars,” a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)
The tension seems tightest in Hayes “Cerebral” Valley, a pleasant and grassy suburb often cosplaying as a city known as San Francisco and the copy-and-paste neighborhoods of Silicon Valley. This is where journalist Andrew Marantz encounters AI researchers who share houses, partners, and daycare duties while talking about something called “the alignment problem,” a euphemism so dull that only people calling themselves effective accelerationists could have imagined it (a school of thought with an edginess commensurate with the effort of coming up with a fancy way to say ‘people who do stuff fast,’ and whose most successful advocate is a curly-haired buffoon serving twenty-five years in prison for doing stuff just a little too fast). Then again, Marantz notes that AI skeptics are often derided by this painfully phoneticized group as “decels,” which is genuinely clever. So, like any denomination, there are clearly degrees to the movement. I just want to know if the slash is silent in “e/acc” and would love to introduce myself as a big proponent of “e-slash-ack” the next time I’m in the Bay Area.
So as Seinfeld would ask: what’s the deal with the alignment problem?
The people who urge caution - “decels” - are worried that AI could begin to optimize for efficiency so violently that humanity is viewed as an obstacle to the objective and therefore must be categorically eliminated for the greater good.
As The New Yorker puts it:
Imagine a world in which more powerful A.I.s pilot actual boats—and cars, and military drones—or where a quant trader can instruct a proprietary A.I. to come up with some creative ways to increase the value of her stock portfolio. Maybe the A.I. will infer that the best way to juice the market is to disable the Eastern Seaboard’s power grid, or to goad North Korea into a world war. Even if the trader tries to specify the right reward functions (Don’t break any laws; make sure no one gets hurt), she can always make mistakes.
Most AI doomsday scenarios read like the back of a sci-fi book that AI would write. Fewer scenarios focus on the apocalypse of the spirit if AI teaches people to treat every experience as a product instead of a process - but that’s because the people who would think of those scenarios aren’t usually in the AI industry in the first place. This is the same blindness so apparent in the Web3 industry, where believers periodically convince themselves that cryptocurrencies and blockchain are the future rather than financial vehicles and, depending on the price of Bitcoin, passionately pretend that decentralized systems will be the future of the internet - all while earning salaries and cashing the dividends of their speculations into USD as soon as the market peaks.
It’s not the hopeful delusions of Web3 or AI proponents that are sinful here - delusions in a digital culture are clearly our only way to find the divine - it’s the absolute disregard of these STEM-first revolutions for the general public interest. Most Web3 companies never even attempted the faintest flirtation with acquiring general, recurring users to prove the point of their purpose. But that one cool villain from that James Bond movie is right: The distance between insanity and genius is measured only by success.
The main difference between Web3 and the AI industry is that people are actually using the stuff, because it makes life easier and it’s easy to understand. But, as OpenAI’s CTO reminds us, there’s still a menace in the techno-optimist’s march, because it seems so categorically determined to leave the inefficiencies of moody creatives and affable people persons behind for good by following the classic e/acc philosophy: build fast and steal things.
The “decels” don’t decry this destiny so much as fret about the speed required to pursue a perfect product: AI skeptics see an apocalypse caused by the output. The general population is worried about the input: what we lose in giving away everything to the machine.
There’s no denying that AI is fun and useful and revolutionary. Many of these same worries haunted Web1 and Web2 in cold splatters of ectoplasmic effusions. Maybe the wholesale desecration of old rituals is always the price of progress to a better tomorrow, whatever all these ghosts of the future present say. Maybe, contrary to Seinfeld’s concerns, humans won’t lose the joy of process to AI so much as gain a new one. I know this is how I feel when I use Canva to generate images for Litverse like the one above in less than thirty seconds. Is that output so bad? Doesn’t it improve the whole of the experience? Is it even plagiarism, given that someone could search paintings online and get inspired enough to do something related to but not remotely resembling a certain style, theme, or subject? Isn’t it all just part of a greater process to produce Litverse’s amazing, completely non-AI prose? Maybe AI companies are Robin Hoods fighting for a Marxist utopia where everyone has the exact same ability to summon dreams to screen by typing in a chatbox.
This is the STEM argument.
The human argument looks at the future of people, not progress. If nearly a third of college students have used ChatGPT to write entire essays and gotten away with it, we are seeing the devaluation of literature, critical thinking, and writing all at once. The missing question of the survey is whether the students even read the essays before submitting them. But maybe the answer is easy: if 96% of students are using AI to help with their education, education is only about the output: a degree. The inputs that lead to a greater consciousness cannot be shared, measured, or sold, and are therefore inefficient.
But what’s so bad about efficiency? Isn’t getting more efficient just cutting out the flabby mid-section of experience where we know what we want, but we have to work for it? Doesn’t more efficiency give us more time to do the things we actually want to do?
For Litverse, yes. I am never going to feel bad about using AI for images, because I like how they turn out. It’s faster and better. But AI isn’t just about cool images. This is STEM-like, output-based thinking.
Technology can act as a tool or a trough. Just look at Apple: in 1981, Steve Jobs - a rare humanities CEO who shows like no other that the inefficiency of creativity is a necessary element of any true revolution - said that technology would be a “bicycle for our minds.” It’s a beautiful and inspirational message that speaks to the consumer use case, rather than the product use case. Today, the average American spends almost five hours a day on the phone his genius built. Are they using their bicycles to reach new heights or to spin in place? Is AI a motorcycle in this analogy? Or is generative AI speeding things up so fast that all human creativity and inspiration will be left behind, breathless - just like our free time once we connected to these always-on pocket distraction machines?
Either way, Seinfeld, the decels, and many Americans seem to share the same ultimate concern about AI: what could be lost in the sheer speed of it.
What isn’t contestable is that the debate is far too abstract. What we’re really losing to AI so far is the power grid and the water supply. Maybe the e/acc movement should think about what’s actually being used to accelerate the machine instead of what’s accelerated by it.
As Seinfeld summed it up at Duke University:
What I like is we’re smart enough to invent [AI] and dumb enough to need it. And still so stupid we can’t figure out if we did the right thing.
Like essays about tech’s effect on the human spirit?
Read “How Steve Jobs Got It Wrong.”