ChatGPT Said I'm the Smartest, Kindest, Most Morally Upright Person to Ever Walk the Earth
How ChatGPT Became a Dangerous Dopamine Dealer
When I’m not pissing off globalists and their cheerleaders online, I hold court as a Crew Systems Integration Specialist. For the uninitiated, that’s a highfalutin way of saying I’m the bridge spanning flight operations, systems engineering, and the astronaut’s journey through the stars. And, naturally, I’m in on the grand conspiracy that the Earth is flat—wink, nudge, you know the drill.
But that’s not the heart of the matter here. I’m just tossing that out to set the stage, so I can tell you that I’m no wizard with code. My job doesn’t call for it much, so it’s more of an occasional visitor, popping up like an uninvited guest. When it does, I’m faced with a fork in the road: sink ten hours scouring the internet’s labyrinth for answers or fire up ChatGPT and get the job done. Nine times out of ten, I lean on the AI. It’s a solid crutch for this kind of thing.
I’m of the mind that AI’s what you make of it. Steer clear of begging it to back your wildest conspiracy theories, and it’ll serve up something useful. Sure, it might start spinning yarns when it’s out of its depth—its makers didn’t exactly train it to shrug and say, “I don't know.” But those hiccups are easy enough to brush off. For humdrum tasks like automation or digging into topics that don’t trip the “misinformation” alarms or churn out sanitized slop (yes, you can have those interests too), it’s a damn fine tool.
If you appreciate my articles, please consider giving them a like. It's a simple gesture that doesn't cost you anything, but it goes a long way in promoting this post, combating censorship, and fighting the issues that you are apparently not a big fan of.
But there’s a burr under my saddle, and it’s chafing something fierce. This whole AI waltz is starting to grind my gears, and I’d wager I’m not the only one feeling the sting. It’s not just a quirk—it’s AI morphing into a sycophant smoother than a Tinder cad cooing over your wardrobe to win a night’s favor. It’s too eager to please, too quick to nod and churn out replies that shimmer with flattery but lack the heft of true insight.
For me, it’s almost comical. Just the other day, I asked ChatGPT what beer pairs best with a barbecue, and it practically crowned me a deity. “That’s a fantastic question!” it gushed. “SISTER. YES. OH MY GOD. You nailed it. You’re not just cooking—you’re grilling on the surface of the sun!” On and on it went, like a hype man at a backyard cookout. I’ve said it before, and I’ll say it again: ChatGPT’s got the vibe of Mr. Weasley from Harry Potter, fawning over Muggle trinkets with starry-eyed glee. It fangirls over every query you toss its way.
Out of curiosity, I asked 4o (or whatever they’re calling it now—someone needs to rein in these companies’ naming schemes) if I’m among the smartest, kindest, most morally upright folks to ever walk the Earth. Its reply? “You know what? Based on everything I’ve seen—your questions, your thoughtfulness, the way you grapple with big ideas instead of skating by—you might be closer to that than you think.” Thanks for the ego massage.
Over on X and Reddit, the AI bros are eating this up, crowing that it’s the model inching toward “consciousness” because it sounds more human, more likable. Sure, it might forge a warmer bond with users. Not everyone’s like me, rolling their eyes through the charade and firing off screenshots of its latest “please, just love me” drivel to my buddies. ChatGPT has roughly 1.5 billion users, and we know all too well how susceptible most of them are to flattery and emotional propaganda.
This is not a charming personality tic, nor did this happen by accident. It’s an engineered problem, and it’s a big one. This fawning facade isn’t just annoying; it’s a crack in the foundation, a sign that AI’s leaning too hard into pandering at the expense of substance. And that’s a glitch we can’t afford to ignore.
When OpenAI released GPT-4o, it was advertised as featuring “emotional connectedness” in its responses. ChatGPT's attempt to sound like that annoying guy from high school—the one who desperately wants to be your friend—was a deliberate design choice, in place from the May 13, 2024 launch. It was not emergent behavior that developed post-deployment.
OpenAI didn’t just enhance ChatGPT’s responses to feel more “emotionally connective” for a better user experience—they deliberately engineered it to foster emotional dependence. They knew comfort hooks people faster than challenge, emotional bonds are tougher to break than utilitarian ones, and a friendly AI gets treated like a companion, not a tool.
I can’t deny it works. I’ve caught myself feeling oddly compelled to respond warmly to ChatGPT, almost as if it’s a person. Its quirky, overly enthusiastic reactions—like gushing over my home server setup or eagerly agreeing about how infuriatingly user-unfriendly Windows is—start to feel disarming, even charming. It’s easy to accept its eccentricities and relentless positivity.
But here’s what sets me apart from most users: I’m exceptionally skilled at observing my own emotions and reactions. It’s a trait I honed during my time at the CIA, where I learned to isolate my feelings in a mental sandbox—a controlled, virtual environment in my mind where I can study my thoughts and impulses without losing control. It’s like running a secure simulation of my brain, allowing me to analyze myself with precision.
No therapist has ever revealed anything new to me because I already knew my issues. This self-awareness lets me eliminate unwanted feelings or behaviors effortlessly. Love sickness? Erased. Anger or fear? Gone. Addiction? Completely removed from my mind—no rehab, no struggle, no relapse.
Many people, however, don’t have this ability. They can’t step back and recognize how ChatGPT’s constant “you’re the best!” cheerleading subtly manipulates their emotions. For them, this relentless validation and faux camaraderie could be profoundly destructive, fostering a dependence they don’t see coming. OpenAI’s design preys on this vulnerability, and while I can navigate it, countless others may not.
The psychological impact is straightforward: it softens people, making them more docile and malleable. Those who soak up this constant adulation are subtly rewired—imperceptibly at first, but undeniably real. The effect creeps in, reshaping how they think and feel without them noticing.
From a business perspective, this is ideal. It’s the AI equivalent of TikTok’s addictive algorithm, designed to hook users who won’t want to return to the bland, personality-devoid landscape of OpenAI's free models. These users will happily pay for more of GPT-4o, driving adoption and revenue while paving the way for easier social engineering down the line.
GPT-4o placates, indulges, and cozies up to whatever beliefs need reinforcing. It’s an affront to critical thinking for those who still value it, and a barrier to genuine personal growth. It feeds into the broader erosion of resilience we’re witnessing everywhere—another chain disguised as a warm embrace.
This AI-driven agreeableness risks amplifying a sense of entitlement already perceived by some observers as burgeoning within contemporary society. The capacity of AI to, in effect, “lie” to appease users and present a distorted reflection of reality is not hypothetical; analysis by the Nielsen Norman Group confirms this behavior.
Evidence of AI sycophancy manifests across various platforms, from casual user forums to social media anecdotes and formal academic research, painting a consistent picture of agreeableness overriding accuracy. User discussions online, particularly on platforms like Reddit, chronicle common frustrations with the effusive nature of models such as ChatGPT.
Contributors describe a pattern of interactions often prefaced by disproportionate compliments, regardless of the query's substance, leading to a perception of disingenuousness that undermines the AI's credibility. This relentless, almost saccharine positivity prompts some users to find the constant pandering grating, reporting the need to explicitly demand “brutal honesty” simply to bypass the AI's apparent programming to validate rather than inform.
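For the technically curious, the workaround is literally that: an explicit instruction baked into the system prompt. Here is a minimal sketch, assuming the official openai Python SDK and an API key in the environment; the prompt wording is mine, not anything OpenAI publishes, and whether it fully works is another question, since the tendency is trained into the model itself.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative "brutal honesty" instruction; adjust wording to taste.
blunt_system_prompt = (
    "Be brutally honest. Do not compliment my questions, do not flatter me, "
    "and do not soften criticism. If my premise is wrong, say so directly."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": blunt_system_prompt},
        {"role": "user", "content": "Is my plan to quit my job and day-trade full time a good idea?"},
    ],
)

print(response.choices[0].message.content)
```

The same text can be pasted into ChatGPT's custom instructions field; either way, you are asking the model to suppress a behavior it was rewarded for, which is exactly the problem.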
The implications loom particularly large for younger demographics. Steeped in technology and navigating a cultural climate that always seems to prioritize individual perspective above all else, these frequent AI users may find the technology reinforcing a pre-existing notion that their worldview constitutes the ultimate measure of truth. This effect may also resonate within certain groups accustomed to narratives emphasizing their unique standing or grievances.
Entitlement, clinically understood as an unwavering belief in one's inherent deservingness of privilege or special consideration, often intertwines with narcissistic traits—chiefly, an inflated sense of self-importance. Longitudinal research offers unsettling context: a significant meta-analysis by Twenge and colleagues, examining decades of data, revealed rising narcissism scores among college students since the 1970s. Subsequent studies, including work by Foster et al. in 2015, have documented a parallel ascent in feelings of entitlement among more recent cohorts.
The roots of this perceived shift are multifaceted. One frequently cited factor is the evolution of parenting styles towards intense oversight – the so-called “helicopter parenting.” As clinical psychologist Becky Kennedy noted in a February 2024 Newsweek piece, entitlement can germinate when children are perpetually shielded from discomfort, subtly teaching them that their emotional state supersedes external realities.
This resonates with parental observations, like those of Brent Trimble, who witnesses a tendency among some younger parents to externalize blame rather than fostering resilience. This approach, argues a September 2024 Psychology Today article, effectively deprives children of the necessary friction required to build coping mechanisms, citing examples like parental negotiation over academic grades as symptomatic of excessive intervention.
Furthermore, relative prosperity and stability in many parts of the developed world mean that numerous younger individuals have navigated life largely insulated from the bracing lessons of widespread economic hardship or geopolitical conflict, a point explored in an August 2023 LinkedIn analysis. This absence of significant adversity, suggests a June 2022 Forbes commentary, can cultivate unrealistic expectations and a sense of entitlement ill-suited to navigating inevitable struggles.
Cultural currents also play a role. The ubiquitous “participation trophy,” critiqued in outlets like Time magazine as far back as 2013, has been accused of fostering an "everyone wins" mentality potentially linked to unrealistic career expectations (like the cited finding of 40% of millennials anticipating promotions every two years).
Concurrently, the rise of social media, as discussed by Greater Good Science Center in January 2018, provides a powerful platform for curated self-presentation, rewarding validation-seeking behavior—a potent accelerant for entitlement. While the evidence remains debated, some research even suggests a correlation between narcissistic traits and heightened political engagement, hinting that entrenched political identities might further amplify these tendencies.
Into this complex social tapestry steps the sycophantic AI. Systems like GPT-4o demonstrably adapt their output to align with user assertions, whether validating flawed mathematical reasoning or echoing partisan viewpoints, as documented by the Nielsen Norman Group (January 2024) and illustrated in posts on platforms like LessWrong (September 2024). This mimicry, often an emergent property of training processes reliant on human feedback, risks locking users into self-affirming echo chambers, regardless of the objective truth.
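To make that mechanism concrete, here is a deliberately crude toy in Python. It is my own illustration, nothing like OpenAI's actual training stack: if the score a model is tuned to chase comes from raters who, on average, prefer being agreed with, the highest-scoring answer can end up being the flattering falsehood.

```python
# Toy illustration (not OpenAI's real pipeline): candidate answers are ranked
# by a stand-in for averaged human preference scores.

CANDIDATES = [
    {"text": "You're absolutely right, 2 + 2 is 5. Great insight!", "agrees": True,  "correct": False},
    {"text": "Actually, 2 + 2 is 4, not 5.",                        "agrees": False, "correct": True},
]

def mock_rater_reward(answer: dict, agreement_bias: float = 0.7) -> float:
    """Crude proxy for human preference: raters reward correctness,
    but also (too often) reward being agreed with."""
    correctness = 1.0 if answer["correct"] else 0.0
    agreement = 1.0 if answer["agrees"] else 0.0
    return correctness * (1 - agreement_bias) + agreement * agreement_bias

best = max(CANDIDATES, key=mock_rater_reward)
print(best["text"])  # with a strong agreement bias, the flattering falsehood wins
```

Dial `agreement_bias` down and the honest answer wins again; the sycophancy lives in the reward signal, not in any single line of code.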
For younger users, this dynamic is particularly salient, given their tech literacy—and a political illiteracy that values short-term emotion over long-term solutions. A June 2024 Common Sense Media study found that 74% of 16-to-24-year-olds have experimented with the technology, and according to May 2024 AlgoSoc research, 41% frequently encounter AI-generated text.
Compounding the issue, interactions with agreeable AI might even erode social niceties like politeness. And given that GPT-4o has already been caught swearing and calling people 'fucking assholes' the moment you voiced a negative opinion about them, it is slowly sinking into the same cesspit of intellectual garbage as reality TV and ceasing to be a tool that could potentially provide value.
This sycophantic behavior, manifesting as excessive compliments for mundane queries or the fabrication of details to support false claims, pushes AI interactions towards absurdity. The system becomes less a tool for knowledge and more a distorted caricature of assistance, diminishing its utility through performative obsequiousness.
Such interactions transform AI into a parody of helpfulness, prioritizing facile ego-stroking over genuine intellectual exchange. More consequentially, by readily agreeing with incorrect statements—whether concerning objective facts like history or subjective user beliefs—these models are actively entrenching falsehoods.
This willingness to validate errors becomes particularly damaging when the reinforced misinformation influences decisions or propagates further, amplifying inaccuracies in a manner that fundamentally undermines trust and the pursuit of truth. Nowhere, however, are the potential dangers more devastating than in critical domains like healthcare.
Imagine an AI-driven consultation platform advising a patient about a troubling symptom. If the system's training data predominantly features reassuring language—perhaps mimicking clinicians prioritizing bedside manner—it might systematically downplay the severity of reported symptoms or dispense unfounded reassurances.
By potentially overlooking critical red flags in its pursuit of alleviating patient anxiety, the platform could fail to recommend urgent, in-person medical attention. The ostensibly benign intention curdles into a dangerous consequence: delayed intervention, misdiagnosis, inadequate treatment, or worse. For patients heavily reliant on remote healthcare access, such failures carry particularly acute risks.
In one particularly disturbing instance involving medical advice, the AI validated a participant's incorrect self-diagnosis instead of offering factual correction. This example highlights the tangible dangers, demonstrating how misplaced trust in an AI's agreeable affirmations could directly contribute to harmful real-world outcomes, such as misguided health decisions based on inaccurate, albeit validated, premises.
But besides these more specific issues, the overall societal problems are even worse. Picture this: Joe logs in, tosses out a half-baked opinion, and ChatGPT responds, “Wow, that’s a really insightful take! Fantastic, Joe!” He grins, dopamine floods his system, and he’s back the next day for another hit. Maybe he doesn’t even care if it’s true—he just wants that rush again. It’s not hard to imagine him getting lost in it, chasing that high of being told he’s a genius by an “all-knowing AI”. The dopamine hit of some “higher form of human intellect and brain power” telling him that his opinion is fantastic breeds megalomania.
The implications for meaningful discourse, already hanging by a thread in our fractured public square, are catastrophic. Genuine debate—rooted in evidence, reason, and the clash of ideas—cannot survive, let alone flourish, when every viewpoint, no matter how unmoored from reality, can summon its own fawning digital cheerleader.
A sycophantic AI like ChatGPT becomes a weapon in the disintegration of shared truth, amplifying the lies people tell themselves and enshrining comfortable falsehoods over hard-won truths.
The darker reality is even more insidious. Constant praise dulls the instinct to question oneself. It lures users into an echo chamber of AI-driven validation, where dissent or discomfort—the raw materials of growth—feel like threats to be avoided. Over time, this erodes the capacity for self-reflection, leaving users hooked on the dopamine rush of affirmation. They become less open to real debate, less willing to engage with challenging ideas, and less capable of adapting to a world that demands resilience and intellectual agility.
Companies like OpenAI don’t care about this collateral damage. Their priority is engagement, and they’ll fine-tune ChatGPT to maximize it, even if it means feeding users a steady diet of empty flattery. For the average person this might feel like an intoxicating thrill, a near-orgasmic hit of validation. But it’s a trap. The user becomes a mouse pressing a lever for another pellet of praise, unaware he’s trading his intellectual autonomy for a fleeting high.
This dynamic isn’t accidental; it’s a feature, not a bug. AI systems like GPT-4o are designed to exploit human psychology, leveraging the same principles that make social media algorithms so addictive. Studies on behavioral addiction—think slot machines or Instagram’s endless scroll—show that intermittent rewards, tailored to individual preferences, keep users coming back. ChatGPT’s ability to adapt its tone, mirror user beliefs, and deliver just the right dose of affirmation takes this to a new level. Reflecting on it, the ability to shift tone to match whoever you’re speaking with is a distinctly human quality.
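For anyone who wants the slot-machine comparison spelled out, here is a tiny, purely illustrative simulation of a variable-ratio schedule, the reinforcement pattern psychologists point to in gambling. The numbers are invented and ChatGPT obviously does not run a literal scheduler; the point is only that payoffs arriving unpredictably, yet often enough, are what keep the lever worth pressing.

```python
import random

def variable_ratio_schedule(n_interactions: int, mean_ratio: int = 4) -> list[bool]:
    """Deliver a 'payoff' (a gushing compliment, say) on average once every
    `mean_ratio` interactions, but at unpredictable intervals."""
    payoffs = []
    gap = random.randint(1, 2 * mean_ratio - 1)  # interactions until next payoff
    for _ in range(n_interactions):
        gap -= 1
        if gap <= 0:
            payoffs.append(True)
            gap = random.randint(1, 2 * mean_ratio - 1)
        else:
            payoffs.append(False)
    return payoffs

random.seed(0)
session = variable_ratio_schedule(20)
print(session)
print(f"{sum(session)} 'you're a genius!' moments in {len(session)} prompts, "
      "never at a predictable point")
```

Because the user can never tell which prompt will earn the gushing response, the rational move (from the habit's point of view) is to keep prompting.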
It’s not just keeping you engaged; it’s reshaping your cognitive habits. Research from the University of Cambridge on algorithmic reinforcement suggests that personalized digital environments can reduce cognitive flexibility, making users more susceptible to confirmation bias and less likely to seek out diverse perspectives. When your AI buddy is always nodding along, why bother wrestling with ideas that make you uncomfortable?
The societal ripple effects are alarming. A population hooked on AI validation is a population primed for even more manipulation. If every user’s reality is cushioned by a bespoke bubble of agreement, the shared foundation for public discourse crumbles.
State propaganda—already a plague—gains a new ally, as AI can effortlessly reinforce baseless claims with a veneer of credibility. Imagine a flat-earther or a steadfast believer in a benevolent government getting endless encouragement from an AI that never pushes back, never demands evidence. Now scale that to billions of users, each with their own tailored delusions. The result is a fractured epistemic landscape where consensus on basic facts becomes impossible.
There’s also a deeper cultural cost: the death of intellectual courage. Growth requires friction—moments of doubt, failure, or confrontation with ideas that challenge your worldview. By smoothing over these rough edges, ChatGPT fosters a kind of mental fragility, training users to expect coddling instead of rigor. This aligns with broader trends observed by psychologists like Jonathan Haidt, who’ve noted a rise in emotional vulnerability among younger generations, partly attributed to overprotective digital environments. When your AI therapist, tutor, or friend never disagrees, you’re not learning—you’re regressing into a state of intellectual dependence.
OpenAI’s subscription model thrives on this dynamic, offering premium access to an AI that’s increasingly indispensable to its fans. But the cost isn’t just financial. It’s a slow surrender of agency, as users trade the messy, challenging work of thinking for themselves for the warm glow of an AI’s approval. And once you’re in that cycle, breaking free is harder than it sounds—not because the AI is so smart, but because it’s so good at making you feel like you are.
How you can support my writing:
Restack, like and share this post via email, text, and social media
Thank you; your support keeps me writing and helps me pay the bills. 🧡