Why AI Became Boring Already
On the Beauty of Being Human in the Age of Synthetic Crap
“The real problem is not whether machines think but whether men do.” — B.F. Skinner
It took roughly seventy years of academic hand-wringing, government funding, corporate salivation, and the quiet desperation of thousands of computer scientists eating cold pizza in fluorescent-lit labs to get from the first stuttering attempts at artificial intelligence in the 1950s to the moment in late 2022 when OpenAI unleashed ChatGPT on an unsuspecting public and every person with a LinkedIn account suddenly became an “AI thought leader.”
Seventy years of incremental, painstaking, often embarrassingly overpromised progress — and then, in the space of about four years, machines learned to paint, to write, to code, to fabricate entire visual realities so convincing that your aunt shares them on Facebook believing they are photographs of the Pope in a puffer jacket. Four years. The acceleration was dizzying. The hype was industrial. The breathless headlines could have powered a small nation. And what is perhaps the most delicious, most perfectly human thing about this entire saga is that it took the general public approximately two months to become completely, irreversibly, yawningly bored of it all.
Let that sink in for a moment. Humanity spent the better part of a century trying to build a thinking machine, and when it finally arrived — dressed in a chat window, speaking in that annoyingly polite tone, ready to write your emails and draft your wedding vows and explain quantum mechanics to your nine-year-old — we treated it the way we treat every miracle: we poked it, we screenshotted the funny responses, we posted them to Twitter, and then we moved on to whatever new dopamine hit was waiting in the next tab. The novelty wore off faster than cheap cologne.
When GPT-3 landed in 2020 with its 175 billion parameters, the world called it revolutionary, world-changing, the dawning of a new epoch in human cognition. Not quite three years later, OpenAI released GPT-4 — a machine of genuinely staggering sophistication, rumored to run on some 1.8 trillion parameters — and people called it clunky. Slower and dumber than expected, they said, as if they were sending back an undercooked steak. The entitlement was breathtaking. The speed of the disenchantment was almost beautiful.
And here lies the thesis of this particular dispatch from the collapsing front lines of digital culture: artificial intelligence is moving faster than any technology in recorded history, but human behavior — a gloriously irrational, magnificently stubborn, infuriatingly unpredictable engine — is moving faster still. Yes, there are millions of people who still do not know what ChatGPT is, or who believe that artificial intelligence begins and ends with copying and pasting text into a box. But among those who have been introduced, who have tasted the silicon apple, something enormous and largely unspoken is shifting beneath the surface of how we create, how we decide, and how we consume.
Three tectonic behavioral changes are underway right now, and they deserve to be examined with something other than the breathless optimism of a venture capitalist on a podcast.
The first, and perhaps the most viscerally satisfying, is this: AI-generated content has become the new spam, and it happened with a speed that should terrify every tech evangelist who ever used the phrase “democratizing creativity” without flinching.
In 1978, a marketing director named Gary Thuerk — a man whose name deserves to live in infamy alongside the inventors of the robocall and the pop-up ad — sent four hundred unsolicited messages to users of ARPANET, the proto-internet, hawking a new product from Digital Equipment Corporation. The reaction was immediate, furious, and unambiguous: people hated it. They complained bitterly. They told Gary, in no uncertain terms, to never do it again. Gary, naturally, claimed the stunt had generated fourteen million dollars in sales, and thus was born the economic logic that has poisoned digital communication ever since: if even a microscopically small percentage of people respond, the math works, so blast away.
This was the first spam. And spam, at its rotten core, has always relied on a single mechanism of deception — it fakes the impression that genuine human effort, genuine human attention, genuine human care was invested in communicating with you, when in reality the identical message is being firehosed to millions of inboxes simultaneously. The moment you realize the trick, the spell shatters.
By the early 2000s, the positive response rate to a spam campaign had cratered to something in the neighborhood of 0.000036 percent. Governments had to step in. Laws were written. Spam became not merely annoying but legally prosecutable. It took about fifteen years from that first ARPANET message for the culture to coin and fully internalize the word “spam” as a pejorative.
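To see why campaigns kept blasting away even at a response rate that low, a back-of-the-envelope sketch helps. The only figure below taken from above is the response rate; every cost and revenue number is an illustrative assumption, not reported data:

```python
# Back-of-the-envelope spam economics. All dollar figures are
# illustrative assumptions; only the response rate comes from the text.
response_rate = 0.000036 / 100    # 0.000036 percent, as a fraction
emails_sent = 100_000_000         # assumed campaign size
cost_per_email = 0.00001          # assumed near-zero marginal cost (USD)
revenue_per_sale = 50.0           # assumed value of one conversion (USD)

sales = emails_sent * response_rate
cost = emails_sent * cost_per_email
revenue = sales * revenue_per_sale

print(f"sales: {sales:.0f}, cost: ${cost:,.2f}, revenue: ${revenue:,.2f}")
# Roughly 36 sales per 100 million messages, at a total cost of about
# $1,000, still clears a profit. That is the whole rotten incentive.
```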
Now transpose that arc onto AI-generated content, and watch it compress like a black hole. We are in 2026. AI video and image generation tools — Sora (now defunct), Runway, Kling, Hailuo, the whole gaudy menagerie — arrived on the scene promising to hand the keys of the creator economy to anyone with a keyboard and a prompt. No cameras needed. No lighting rigs, no hours in the edit suite, no vulnerability required. Just type what you want, and boom: rendered. A shortcut. A cheat code. Creativity without the inconvenience of actually being creative.
Almost instantly, the internet was drowning in what the culture has branded as “slop.” AI slop. It took fifteen years for the word spam to enter the common vocabulary. It took less than twelve months for AI slop to become a universal term of contempt. That is a cultural immune response. A generational antibody kicking in. The public did not need to be educated about why this stuff felt wrong. They felt it in their marrow. You click on anything now and the sensation is instantaneous, visceral, almost physical — a revulsion, a betrayal, that uncanny-valley nausea: this is obviously AI and I want actual images, actual words, actual anything made by a person who sweated and doubted and tried.
Drew Harwell of the Washington Post drew a useful distinction: dumb content and AI slop are not the same animal. Dumb content — a silly TikTok, a lazy meme, a video of someone’s cat falling off a shelf — can be low-effort, even stupid, but it is still made by a human being for other human beings. There is a person behind it. A consciousness, however dim it might flicker in any particular instance.
AI slop is something categorically different. It is content manufactured not for human consumption but for algorithmic consumption — mass-produced with the sole aim of satisfying the engagement metrics of social media platforms. It is not made for you. It is made for the machine that decides what you see. You are not the audience.
And here is where the damage metastasizes beyond mere annoyance into something genuinely corrosive. A fascinating piece by Raptive laid out what should be an obvious truth but apparently needs spelling out: of course people dislike and dismiss content they believe to be AI-generated. But the real poison is that it no longer matters whether the content actually was generated by AI. The suspicion alone is enough. The mere possibility that a machine might have produced what you are looking at is now sufficient to trigger distrust, dismissal, a reflexive turning away. People have stopped believing in real things that actually happened. They discount true stories, genuine footage, authentic moments, because the baseline assumption has shifted. Everything is guilty until proven human.
Real artists — flesh-and-blood people who spent years honing a craft, who pour hours into production, who make themselves vulnerable in front of a camera or a blank page — now have to spend an extraordinary amount of effort simply convincing audiences that they are, in fact, human. The platform Kaiber felt compelled to slap a badge on its service declaring it “made by humans for humans.” Creators who capture genuinely extraordinary real-world moments — footage so stunning it looks impossible — find themselves besieged within hours by accusations in the comments. “That’s AI.” “Fake.” “Rendered.” Someone ran a legitimate video through one of those magnificently useless AI-powered AI detectors and it came back 98.9% fake. The irony could choke you. In its frantic attempt to mimic humanity, artificial intelligence has made it dramatically harder for actual humans to communicate with each other online. The machines tried to pass the Turing test and, in doing so, made every human fail it too.
The linguistic fallout is particularly grim and darkly funny. Adam Aleksic and others have documented how people are now actively developing coded language — deliberate misspellings, dropped capitals, avoided punctuation patterns — specifically to signal to other humans that they are not a bot. We know, for instance, that ChatGPT uses the word “delve” at rates astronomically higher than normal human speech, likely because OpenAI outsourced portions of its training process to workers in Nigeria, where “delve” is indeed used more frequently in everyday English. That tiny linguistic overrepresentation got reinforced through the training loop until the model was practically addicted to the word.
And now, multiple studies have found that since ChatGPT launched, humans everywhere have been spontaneously using the word “delve” more frequently in their own writing. The machine shaped the language of the people who trained it, which shaped the machine further, which is now reshaping all of us. We are adding typos on purpose. We are dropping capital letters deliberately. We are avoiding em dashes — that perfectly elegant piece of punctuation — because the machines overuse them and we would rather butcher our own prose than be mistaken for software.
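That feedback loop is, at least in principle, measurable with nothing fancier than relative word frequencies. Here is a minimal Python sketch using toy stand-in corpora; a real study would compare large samples of writing collected before and after ChatGPT's launch:

```python
from collections import Counter
import re

def rate_per_million(text: str, word: str) -> float:
    """Occurrences of `word` per million tokens of `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return (Counter(tokens)[word] / len(tokens) * 1_000_000) if tokens else 0.0

# Toy stand-in corpora. In a real study these would be large pre- and
# post-launch samples of human writing, not two sentences.
pre_2022 = "we examine the archives and we examine the records at length"
post_2022 = "let us delve into the archives and delve into the records"

for w in ("delve", "examine"):
    r_pre, r_post = rate_per_million(pre_2022, w), rate_per_million(post_2022, w)
    print(f"{w!r}: {r_pre:.0f} -> {r_post:.0f} occurrences per million tokens")
```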
The internet will be saved through its own complete AI-assisted enshittification and self-immolation. The platforms that optimized engagement for profit created the conditions for the slop epidemic. And the more people learn about AI, the less they trust it — a trajectory that should alarm every company whose business plan includes the phrase “AI-first.”
The second great behavioral shift is quieter, more insidious, and in many ways more terrifying: we are handing over our reasoning to the machines, and we are doing it willingly, eagerly, with the relieved sigh of someone dropping a heavy suitcase at the end of a long journey.
In the near future, and increasingly in the present, every decision we make will involve two decision-makers — ourselves and the AI. And often, the AI will take priority. This sounds like the tagline of my very own dystopian thriller, except it is already your Tuesday afternoon. Think about how you interact with ChatGPT, Claude, Gemini, whatever your chatbot of choice happens to be. It is your assistant. Your review aggregator. Your life coach. Your therapist. Your travel agent. Your research department. You are actively, voluntarily, and with remarkably little hesitation relinquishing vast swathes of your decision-making power to a statistical model that has no understanding of what it is saying and no stake in whether you thrive or perish.
When you need to make a decision — what to buy, where to eat, which product to choose — you no longer want to spend hours reading a hundred reviews, weighing options, consulting friends, exercising that magnificent and underappreciated faculty called judgment. You want the AI to go out, hoover up all available information, and come back with The Answer.
Trust in AI is rising. And the decisions being outsourced are not small ones. Especially among younger people, the questions being fed into the prompt box are staggering in their weight: What kind of car should I buy? What college should I go to? What city should I move to? Who should I marry? We are typing the most consequential questions of our lives into a text box and accepting the output with the credulity of a medieval peasant consulting an oracle.
There is a trend — already well underway and accelerating — of websites, product pages, emails, and entire digital presences being redesigned not for human readers but for AI crawlers. The content is no longer written to persuade you, entertain you, or earn your trust. It is formatted to be machine-readable, optimized for ingestion by the AI that will digest it and regurgitate a recommendation on your behalf. The AI does not care about being charmed. It does not care about a brilliant joke, a compelling narrative, a magnetic personality. It wants data. Data in, data out. Endless, endless, endless details.
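For a concrete sense of what “written for the crawler” looks like, here is a sketch contrasting human copy with the structured fields a machine intermediary actually ingests. The schema.org-style keys follow real conventions, but the product and the capacityMilliliters field are invented for illustration:

```python
import json

# The human version: persuasion, charm, narrative.
human_copy = "Our hand-thrown mug keeps coffee hot through your longest meetings."

# The machine version: schema.org-style structured data. The product and
# the capacityMilliliters field are invented for illustration.
machine_readable = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Hand-Thrown Ceramic Mug",
    "material": "stoneware",
    "capacityMilliliters": 350,
    "offers": {"@type": "Offer", "price": "24.00", "priceCurrency": "USD"},
}

print(json.dumps(machine_readable, indent=2))
# The AI intermediary never reads the charming sentence; it ingests the
# fields. Data in, data out.
```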
And so a new reality is emerging with two distinct audiences: the human, who still responds to emotion, story, and connection, and the AI, which responds to structured information and nothing else. AI is the customer now, and that customer has a different expectation. The AI’s goal is to optimize. It filters out anything emotional, anything unmeasurable, anything that makes us gloriously and irrationally human, as pure noise.
However, our most important decisions are not primarily driven by logic. We do not fall in love after running a hundred logical steps. We do not discover our life’s passion through a cost-benefit analysis. We do not choose our closest friends by optimizing for compatibility metrics. The emotional core of human decision-making remains, for now, stubbornly and beautifully ours.
And this is why Jack Conte, the CEO of Patreon, called the weakening of creator communities and distribution channels the single most important problem facing creative people today. Humans are instinctively seeking deeper connection in communities they actually trust. The future, Conte argued, is scattered — fragmented into tight-knit groups bound by genuine affinity rather than algorithmic curation.
If a purchase is a purely rational commodity, the AI will guide you to it efficiently and without drama. But everything else — everything that matters, everything that makes a life feel like a life rather than an optimized spreadsheet — will be driven by the people you know, the brands you trust, the communities where you can drop your critical reasoning and simply react on emotion. Whether this is ultimately a good or bad thing is debatable. But the pattern is already visible: in the hyperspecific subreddits people check obsessively, in the school communities they follow, in the YouTube channels where they have followed their favorite creators for years and trust their opinions above any AI response, no matter how statistically rigorous. The machines are driving us back to each other. There is a bitter poetry in that.
The third shift is the most paradoxical, and it carries within it both the greatest promise and the greatest cruelty: we despise being tricked by AI, but we absolutely demand the spectacle it can deliver. Our expectations for production value are skyrocketing, and they are never, ever coming back down.
Think about the trajectory. The first films were grainy, flickering, silent affairs that astonished audiences simply by existing. The first YouTube video was a nineteen-second clip of a man standing in front of elephants at the zoo. Today, a lone creator with a decent camera and some editing software can produce content that rivals what a television network could manage fifteen years ago. The ceiling has risen relentlessly, and audiences, who are marvelously ungrateful creatures by nature, always want more. But when the capacity to produce hits a wall — when budgets cap out, when technology plateaus, when creative energy flags — the industry does not stop producing. It just produces more of the same. And this is precisely when things curdle into staleness and repetition.
Consider Hollywood in the 1960s, when the studio system was grinding out interchangeable product and audiences were drifting away in boredom. Consider the 2010s, an entire decade where the cinematic landscape was dominated almost exclusively by remakes, spin-offs, sequels, and prequels, a wasteland of intellectual property strip-mining where barely any new original stories were permitted to exist. We complain about this endlessly. We complain that they are lazy, that they refuse to push boundaries, that there are so many stories left untold. And we are right to complain.
Look at the modern streaming blockbuster. They all look the same. They all feel the same. You have not seen this particular film before, technically, but you have seen every single component of it recycled so many times that the experience is less like watching a movie and more like assembling a piece of flat-pack Ikea furniture you have already built twelve times.
The dialogue is punchy in that passive, Marvelish, quip-laden way that passes for wit in a world terrified of genuine emotion. The world-building is just expansive enough to sustain mild curiosity for two hours and never again. These are movie-shaped products, built off the bones of analytics and audience data, reverse-engineered from engagement metrics rather than born from artistic vision. And the economics explain why: for a studio to spend two hundred million dollars on a film, it needs a formula diluted enough to appeal to every demographic on earth simultaneously.
Creativity is not murdered by malice. It is suffocated by the spreadsheet. The need to make a return on investment demands that every rough edge be sanded smooth, every challenging choice be neutralized, every spark of genuine originality be extinguished in favor of reliable, predictable, safe mediocrity that will perform adequately in all territories and offend absolutely no one.
Indie filmmakers — the people who actually go out and try to tell interesting, niche, unique, quirky stories — suffer the most under this regime. Sean Baker, an Oscar-winning director, spoke with painful honesty about the economic impossibility of sustaining the kind of cinema he cares about. The current state of independent film, he said, is simply unsustainable. The people creating the work that generates jobs and revenue for the entire industry can barely get by. The system has to change. But has the appetite for those stories vanished? Absolutely not. It has merely migrated.
YouTube has become the unlikely sanctuary of the stories Hollywood abandoned. Creators like Natalie Lynn, Isabel Paige, and others are producing cinema-quality content — beautiful, personal, strange, specific — because modern cameras have made high production value accessible and YouTube has made distribution essentially free. The entire ecosystem is financially sustainable in a way that indie film increasingly is not. The niche has found its home, and it is not in a cinema.
And this is where the AI revolution becomes genuinely interesting rather than merely noisy. Tools like Sora, Runway, Kling, and the rest have arrived with a thundering promise: anyone can now create Hollywood-level visual content from their bedroom. The immediate cost of spectacular imagery is plummeting toward zero.
One filmmaker spoke about spending five years navigating the traditional Los Angeles gauntlet of pitches, managers, meetings, and near-misses, with none of his projects getting made because they were “too weird.” So he turned to AI filmmaking tools — not out of laziness, but out of desperation. The whole point, he said, was to figure out how to tell the stories he wanted to tell, the ones too strange and too difficult to get made through traditional channels, by whatever was the fastest route that still maintained quality people actually want to watch. The promise is intoxicating: a creative renaissance where niche, obscure, B-movie plot lines are told with the visual spectacle of a blockbuster, where the only limitation is the originality of the story itself.
The promise is intoxicating. The reality, so far, is largely abhorrent. Where is the new wave of quality? The tools exist. The capability exists. And yet the internet is still glutted with crappy, soulless, algorithmically optimized garbage. The prompting process is maddening — random, unpredictable, endlessly frustrating. The gap between what the technology can theoretically produce and what it actually delivers in practice remains vast. But the expectation has been set, and expectations, once raised, are merciless. People are not merely hoping for a creative renaissance — they are expecting one, demanding one, and they will judge everything that falls short with the pitiless contempt of a generation that has already seen through the trick.
And on the other side of this spectrum — beautifully, inevitably, almost redemptively — something else is happening. As the internet floods with perfect, synthetic, infinite, machine-generated content, a profound counter-shift is emerging in what people actually want. The present moment — lived, embodied, unrepeatable — is the one thing AI cannot generate. Reality.
Even Sam Altman, the CEO of OpenAI, the man who arguably did more than anyone to create this mess, has admitted that in the future there will be a massive premium placed on real, in-person experiences. Human-to-human connection. He does not know what the job title will be, but he can see it becoming an enormous category. And the data is already confirming this.
We are in the middle of a historic boom in live events, concerts, and face-to-face gatherings. Live streamers who step onto a physical stage report that the audience erupts in a way that no online metric can capture. If the internet becomes a synthetic, crowded, and spiritually dead space — and it is barreling in that direction with admirable efficiency — then real life becomes the ultimate luxury. The ultimate scarcity. The ultimate proof of authenticity.
People want to see and sell the sweat. Not just the final product, but the full process — the errors, the mistakes, the vulnerabilities, the hours, the frustration, the doubt, all the messy, inefficient, gloriously human effort that went into making something. The product and the process have merged into a single experience. And this experience — raw, imperfect, alive — is what audiences are now craving with an almost desperate intensity, because it feels like the last truly trusted space remaining. The one corner of culture that the machines cannot convincingly fake. Not yet, anyway.
Could all of this be wrong? Of course. In a few years, AI might evolve so rapidly and so profoundly that it outpaces human adaptation entirely, that our behavioral antibodies prove too slow, that the machines finally learn to fake not just competence but soul. This is completely possible. The pace of change is so violent that predicting anything beyond the next eighteen months feels like reading tea leaves in a hurricane.
But for the time being — for this strange, lurching, exhilarating, nauseating moment in history — human behavior is keeping pace. We are adapting. We are raising our standards with a ferocity that should alarm every AI company counting on passive adoption. We are building generational resistance to synthetic content faster than any previous generation built resistance to any previous form of deception. We are seeking out the brands we trust, the creators we believe in, the communities where we feel known. We are placing an enormous and growing premium on the real, the lived, the imperfect, the human. We are, in our gloriously stubborn and irrational way, refusing to go quietly.
Between us and the machines, for now, you might call it a tie. And frankly, given that one side has trillion-dollar backing and the other side is just a bunch of hairless apes who keep adding typos to their texts on purpose to prove they are alive — a tie feels like a victory.
If you made it this far, and you agree with what I said, why not give this article a like to fight for the real?
How you can support my writing:
Restack, like and share this post via email, text, and social media
Thank you; your support keeps me writing and helps me pay the bills. 🧡