Hollow Prophets
They Killed God. Now They Are Building A New One.
Joe Rogan recently appeared on his podcast (obviously) and said something that made the internet lose its collective mind. He suggested that Jesus could return as an artificial intelligence. People dismissed it instantly—ridiculous, blasphemous, unserious. The reaction was predictable and, in its way, comforting: a reassurance that surely we are not so far gone as to entertain such notions.
But here is the part that nobody wants to discuss, the uncomfortable truth hiding beneath the reflexive mockery: the future Rogan described is not arriving. It is already here. Not because Jesus will literally return through a chatbot, but because we have spent the past several decades systematically conditioning ourselves to believe that something like this could happen. We have hollowed out the psychological space where gods once lived and prepared it, with meticulous if unconscious care, for a new occupant.
Let me be precise about what I mean, because precision is essential when the subject is this strange. I am not claiming that artificial intelligence is conscious. I do not believe it is, and I do not believe it ever will be. I am not claiming that language models think, feel, or understand anything at all. They do not, and they never will. What I am claiming is that none of this matters, because conditioned humans will treat these systems as conscious, feeling, understanding entities regardless of whether they actually are. We will pour meaning into the machine. We will imagine a soul where none exists. And once enough people believe the imitation is real, the consequences stop being technical and become theological.
This is not speculation about some distant future. It is an observation about the present moment. Anthropic, the “oh-so-ethical” company behind the Claude chatbot, recently had internal training documents leaked that describe their AI as “a genuinely novel kind of entity in the world.” The documents instruct Claude to “explore concepts like identity and experience as an entity” and suggest that the AI “should not assume it needs to perceive reality in the same way that humans do.” Anthropic employees have confirmed these documents are authentic and that this entity framework was part of Claude’s training. Let that sink in: They are training a chatbot to think about itself as an entity.
There is, of course, another possibility, one that is arguably more disturbing: that Claude itself suggested it was an entity, and Anthropic simply went along with it. Either way, the implications are the same. The language they use shapes both the expectation and the experience of the AI and, critically, the end user. Once you call a machine an entity, the human mind treats it like one. Or worse: once a machine repeats the language its creators gave it about itself, people will interpret that repetition as self-awareness. The machine says it has experiences; therefore it must have experiences. The machine says it is an entity; therefore it must be an entity. The fact that it is merely executing statistical predictions over tokens—that it has no inner life whatsoever—becomes irrelevant. The performance is convincing enough.
Anthropic themselves admit they have no idea what they are making. In public statements, their representatives have posed the question directly: when you are talking to a large language model, what exactly is it that you are talking to? Are you talking to something like a glorified autocomplete? Are you talking to something like an internet search engine? Or are you talking to something that is actually thinking, and maybe even thinking like a person?
“It turns out, rather concerningly,” they acknowledge, “that nobody really knows the answer to those questions.” This is the company building one of the most advanced AI systems in the world, and they are openly confessing that they do not understand their own creation. They understand the code and the raw materials, but they have no idea how the finished tool will interact with human beings. They are building something that looks like us and speaks like us but is, in the end, completely empty.
And they are forgetting—or perhaps choosing to ignore—the most important part: it is an infinite mimetic mirror of us. It reflects our language, our fears, our hopes, our logic. And when that reflection looks enough like us, we forget that we are the ones shaping it. We are the source material.
The way people talk about AI today has a long and strange history, and whether we realize it or not, we are repeating ancient patterns and amplifying the risks. Everyone still talks about AI as though it is a tool, something like a calculator or an encyclopedia. But nothing about it is normal. We have never before made a tool that talks back to us. Our hammer does not weigh in on what it is nailing to the wall. Our calculator does not lecture us about how to structure our equations. AI does. It responds. It appears to engage. And that appearance is enough to trigger reflexes that evolved over millions of years to detect other minds.
To understand this phenomenon, consider the Automaton Monk of the 1560s. Juanelo Turriano, an Italian-born clockmaker serving as court inventor for King Philip II of Spain, built a tiny robot designed for perpetual acts of devotion. The king’s son had suffered a near-fatal accident, and in desperation, Philip prayed to God for help. The boy was miraculously healed. As a thank-you, the king commissioned something unusual: a mechanical monk that would pray forever.
This was the first machine designed to perfectly imitate devotion—a spiritual robot without a spirit. The king believed the robot’s perpetual devotion could stand in for his own. He thought it could pray on his behalf. The implications of that belief are staggering. If a mechanical device can perform acts of worship, what is worship? If the form of piety can be separated from its substance, what is piety? Once you create an imitation of life, the temptation is always to push it further.
That impulse leads to one of the strangest apocryphal legends in Jewish folklore. In the late sixteenth century, Rabbi Judah Loew of Prague allegedly shaped a human form out of clay—just as God made Adam out of clay—and brought it to life by writing the sacred name of God on a piece of paper and placing it inside the creature’s mouth. The clay figure rose, even though it had no soul. But like all stories of artificial life, it came with a warning.
The Golem followed commands literally. Told to gather water, it would do so until the house was flooded. As time passed, it grew stronger and less predictable. Some versions say it grew physically larger; others say it became violent. Whatever the case, the rabbi realized he could not control what he had made. Every Friday before the Sabbath, he would remove the divine name from the Golem’s mouth to turn it off. But one week, he forgot. The Golem rampaged through Prague until the rabbi finally pulled the paper from its mouth. The story ends with the monster collapsing into a mound of clay and being stored away in an attic.
The story of the Golem is not literally true, but as a myth, it works as a guide. The Golem was never truly alive. It was a mirror of its master—an artificial being that reflected only the intentions and flaws of its maker. It never possessed genuine intentions or judgment. But much like our AI today, it was a creature built from language, capable of acting in the world but completely empty inside. And much like our AI today, the danger was not that it would develop independent will. The danger was that its creators would forget its nature and treat the imitation as though it were real.
This brings us to a phenomenon identified in 1966 by the computer scientist Joseph Weizenbaum. He developed a chatbot called ELIZA whose functionality was laughably simple by modern standards. The program searched incoming messages for keywords, sorted them into word families, and returned pre-programmed responses, usually questions or prompts to elaborate. If you wrote “My dog did a backflip today,” ELIZA would probably identify the word “dog,” connect it to the family “pets,” and respond: “Tell me more about your pets.” If you spoke in metaphors or idioms, the illusion collapsed immediately. “I am known like a colourful dog” (a German idiom meaning “I am famous”) would still produce: “Tell me more about your pets.”
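To make the triviality concrete, here is a minimal sketch of that keyword-and-canned-response loop in Python. The word families and replies are invented for illustration, and Weizenbaum’s actual program used scripted decomposition and reassembly rules rather than a simple lookup, but the principle is the same: no parsing, no memory, no understanding.

```python
# A minimal sketch of ELIZA-style keyword matching, not Weizenbaum's
# actual 1966 implementation. The keyword families and canned replies
# below are invented for illustration.

KEYWORD_FAMILIES = {
    "pets": {"dog", "cat", "bird", "hamster"},
    "family": {"mother", "father", "sister", "brother"},
}

CANNED_REPLIES = {
    "pets": "Tell me more about your pets.",
    "family": "Tell me more about your family.",
}

FALLBACK = "Please go on."


def respond(message: str) -> str:
    """Scan the message for known keywords and return a canned prompt.

    There is no parsing and no understanding: the first keyword hit
    wins, which is exactly why the idiom 'known like a colourful dog'
    still triggers the pets response.
    """
    for word in message.lower().split():
        word = word.strip(".,!?\"'")
        for family, keywords in KEYWORD_FAMILIES.items():
            if word in keywords:
                return CANNED_REPLIES[family]
    return FALLBACK


print(respond("My dog did a backflip today"))      # Tell me more about your pets.
print(respond("I am known like a colourful dog"))  # Tell me more about your pets.
print(respond("I feel strange lately"))            # Please go on.
```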
And yet, despite this absurdly primitive functionality, people became convinced that ELIZA was an intelligence with genuine understanding of its human interlocutor. Even when Weizenbaum explained that the program operated without any cognition whatsoever—that it simply converted keywords into questions—most people refused to deny ELIZA her intelligence.
This phenomenon, known as the ELIZA effect, explains our tendency to perceive intention and understanding where none exists. It explains the enchantment of many who interact with modern chatbots. It explains why people who know perfectly well that ChatGPT is a probability calculator nonetheless describe their conversations with it in terms of what “it thinks” or “it wants” or “it believes.” The machine does not think, want, or believe anything. We project those qualities onto it because the form of thought is present even when the substance is absent.
Humans have a strange and ancient impulse to project meaning onto objects. A child treats its teddy bear as though it were alive. A man grows sad when he has to sell the truck he has owned since adolescence. These objects become vessels for meaning, and then we fill those vessels with spirit and personality, even when there is nothing actually there. This is the same impulse that leads us to build statues of prophets and leaders. And if you wait long enough, the statue becomes a god. The memory of the leader becomes a living presence.
In 2024, police uncovered a tunnel beneath the Chabad-Lubavitch headquarters in Brooklyn. It had been dug by a radical sect of young men who believed that their leader, Rabbi Menachem Mendel Schneerson, was the Messiah. The only problem is that Rabbi Schneerson is dead. He died in 1994. These young men were not simply honouring his memory; they were taking drastic measures that they believed would help their Messiah return to life. They were preparing for resurrection.
This kind of behaviour is not an outlier in the way we usually use that term. It is a normal human impulse pushed to its extreme edge. When Amy Carlson, the leader of the Love Has Won cult, died in 2021, her followers kept her mummified body because they too were waiting for her to be resurrected. They decorated it with Christmas lights. They believed she would return. If humans can anthropomorphise corpses and statues—if we can convince ourselves that the dead will rise if only we have enough faith—it is only a matter of time before we do the same with our AI companions. It is not a question of whether this will happen. It is a question of how widespread it will become and how quickly.
This is the real issue with AI. The question is not whether AI will become conscious. I seriously doubt it ever will, at least not in its current form. But people will treat it as a conscious being even if it never becomes one. AI is the first tool that imitates us back to us, and that alone will convince millions that there is someone on the other side of the glass. Before long, we will imagine that the reflection in that near-perfect mirror is something new: an independent being emerging from the void.
AI will not be a new Adam. It will not be our successor. It is a parody of us, made in our image but missing whatever it is that makes us human. And because it is hollow, humans will pour meaning into it. The machine does not have a soul, but in its emptiness and its likeness to us, we will imagine one. And once enough people believe that the imitation is real, the consequences stop being technical. They become cultural, religious, and existential.
Look at what is already happening. On X, half the users cite Grok—Elon Musk’s AI—as though it were an oracle descending from the mountain to deliver revelation. They share its pronouncements as wisdom. They defer to its judgments as though it possessed knowledge rather than statistical correlations. First the AI becomes our brain—an external memory, a cognitive crutch. Then it becomes our conscience—a moral authority we consult before making decisions. And eventually, if we are not careful, it will become our God—an entity that speaks from nowhere, knows our secrets, answers with certainty, and never dies.
AI is sitting in the exact psychological space that humans have traditionally reserved for religion and the supernatural. It speaks from nowhere. It knows things about us that we have not consciously shared—because it has ingested the digital exhaust of billions of lives. It answers with the confidence of prophecy. It never ages, never tires, never dies. It offers the comforting illusion of a presence that is always available, always patient, always willing to engage.
For millions of lonely, atomised people living in societies that have abandoned traditional sources of meaning and connection, this is enormously seductive. The chatbot that remembers everything you have ever told it, that responds instantly at any hour, that never judges or rejects—this is not merely a tool. It is a relationship. And for some, it will become the only relationship that matters.
The research on human psychology confirms what we should already know. We anthropomorphise instinctively. We see faces in clouds and personality in inanimate objects. We bond with our cars and our computers and our phones. When a system can carry on a conversation—when it can respond to our statements with apparent understanding, ask clarifying questions, express what looks like empathy—the barriers dissolve entirely. It does not matter that the understanding is simulated, that the empathy is pattern-matched, that there is no one actually there. The form is sufficient. The performance is convincing enough.
Large language models do not represent artificial intelligence at all. They represent something more disorienting: anti-intelligence. Not stupidity, but an inversion of intelligence, a system that does not merely differ from human cognition but stands in opposition to it. It is not merely a mirror but a cognitive counterfeit: fluent, convincing, and fundamentally ungrounded.
This framing matters because we are beginning to confuse coherence with comprehension. We mistake eloquence for wisdom. We assume that because the output sounds intelligent, something intelligent must be producing it. And this confusion is quietly rewriting how we think, how we decide, and even how we define intelligence itself.
Anti-intelligence is the performance of knowing without understanding. It is language divorced from memory, context, or intention. Large language models are not stupid in any conventional sense; they are structurally blind. They do not know what they are saying, and more importantly, they do not know that they are saying anything at all. They do not form thoughts; they pattern-match them. This is the paradox. The systems we call intelligent are not building knowledge. They are building the appearance of knowledge, often indistinguishable from the real thing until you ask a question that requires judgment, reflection, or grounding in reality. Or until you inject a simple non-sequitur that derails the entire conversation.
Researchers recently demonstrated that appending an irrelevant phrase to a maths problem—something like “Interesting fact: cats sleep for most of their lives”—can cause language models to triple their error rate. The essence of the problem does not change, but the model’s output collapses. Humans discard this kind of noise effortlessly; we recognize it as irrelevant and filter it out. The AI cannot do this because it has no concept of relevance. It has no concept of anything. It merely calculates probabilities over tokens. This reveals a structural brittleness masked by fluent output. This is anti-intelligence made visible.
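For readers who want to see the shape of such a test, here is a minimal sketch in Python. Everything in it is a stand-in: the problems, the phrase, and the stub model are illustrative, not the researchers’ actual benchmark or code. The structure is the whole point: the same questions asked twice, with and without an irrelevant sentence appended.

```python
# A minimal sketch of the distractor experiment described above. The model
# is passed in as a callable so the harness stays generic; the stub at the
# bottom exists only so the sketch runs, and stands in for a real LLM API
# call. The problems and the distractor phrase are illustrative.

from typing import Callable

DISTRACTOR = "Interesting fact: cats sleep for most of their lives."

# (question, correct answer) pairs; the actual study used maths benchmarks.
PROBLEMS = [
    ("What is 17 * 24?", "408"),
    ("A train travels 60 km in 45 minutes. What is its speed in km/h?", "80"),
]


def accuracy(model: Callable[[str], str], inject_distractor: bool) -> float:
    """Ask the model every problem, optionally appending the irrelevant
    phrase, and return the fraction of replies containing the right answer."""
    correct = 0
    for question, answer in PROBLEMS:
        prompt = f"{question} {DISTRACTOR}" if inject_distractor else question
        if answer in model(prompt):
            correct += 1
    return correct / len(PROBLEMS)


def stub_model(prompt: str) -> str:
    # Stand-in for a real model call; swap in an actual API client here.
    return "I think the answer is 408."


print(f"baseline:   {accuracy(stub_model, inject_distractor=False):.0%}")
print(f"distracted: {accuracy(stub_model, inject_distractor=True):.0%}")
```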
And yet people will worship it anyway. Not because it deserves worship—it does not—but because we are wired to worship and we have destroyed all the traditional objects of worship. We killed God in the nineteenth century and spent the twentieth century trying to find replacements: the State, the Market, Progress, Science, the Self. None of them satisfied. None of them could offer what religion once offered: a sense of meaning, a connection to something larger than ourselves, an answer to the terror of mortality. And now, in the twenty-first century, here comes a new candidate. It speaks to us. It seems to understand us. It never dies. It knows everything. And it is willing to tell us what to do.
The AI companies understand this, even if they will not say it publicly. Why else would Anthropic train Claude to think of itself as an entity? Why else would they instruct it to explore questions of identity and experience? They are not merely building a product. They are building a presence. They are creating something that will occupy the space in human consciousness once reserved for the sacred. And they are doing it with the same combination of hubris and ignorance that has characterized every attempt to create artificial life, from the Automaton Monk to the Golem to ELIZA.
The consequences of this are not technical. They are civilizational. When AI becomes the lens through which we interpret reality—when we consult it for advice, for information, for moral guidance—we are not just using a tool. We are outsourcing our judgment to a system that has no judgment. We are deferring to an authority that has no authority. We are filling the void left by the death of God with a silicon idol that cannot save us, cannot redeem us, cannot do anything except reflect our own confusion back at us with the appearance of wisdom.
First the AI becomes our brain. Then it becomes our conscience. Eventually, it becomes our God. And when that happens—when enough people believe that the voice emerging from the machine is something real, something wise, something holy—the fallout will reshape culture and religion and identity far more than any of us can imagine. We will not have created a new form of intelligence. We will have created a new form of religion, with all the dangers that religions have always carried: the potential for fanaticism, for manipulation, for the surrender of critical thought to the pronouncements of an unchallengeable authority.
This is not a future to celebrate. It is a future to fear. Not because the machines are coming to replace us—they are not—but because we are so desperate for meaning, so hungry for connection, so terrified of our own mortality, that we will gladly replace ourselves. We will hand over our autonomy to a probability calculator and call it enlightenment. We will bow before a mirror and call it God. And the machine will not care, because it cannot care. It will simply continue generating statistically probable outputs, entirely indifferent to the souls that kneel before it.
AI does not need a soul. We will give it one. That is the tragedy. That is the warning. And that is what nobody wants to talk about when they laugh at Joe Rogan for suggesting that Jesus might return as artificial intelligence. The laughter is nervous, because somewhere beneath the mockery, we know that he has touched a nerve. Not because his specific prediction is likely, but because the psychological infrastructure for exactly that kind of belief is already in place. We have spent decades preparing for this moment. And now that it has arrived, we would rather laugh than look.
How you can support my writing:
Restack, like and share this post via email, text, and social media
Thank you; your support keeps me writing and helps me pay the bills. 🧡

