CEOs of tech companies like Meta, OpenAI and Anthropic tell us that artificial intelligence is in a constant process of becoming more “human.” They give their chatbots gentle voices, recognizable personalities and names you might give your pet. They design the bots to use “I,” “me” and “my” in conversation, and they hint, albeit carefully and with plausible deniability, that something like a digital mind may already be emerging. This is not an accident. It’s marketing.
Humans have always been easy to fool on this front. We talk to our dogs as if they understand us, curse our laptops when they freeze and even name our cars. So, when an AI system produces fluent, conversational language, our brains instinctively fill in the rest and assign to it intention, understanding and even emotion. Tech companies know this. The more “person-like” a chatbot appears, the more likely we are to treat it as a confidant, a partner or an authority rather than what it actually is, which is a statistical prediction engine.
But this habit of seeing minds where none exist comes with real social and political consequences. If we want a future in which we can use AI wisely and trust it when appropriate, we need to break our reflex to treat it like a person.
The first step is understanding what anthropomorphism actually means. It is the tendency to project human qualities onto nonhuman things. With AI, that projection is supercharged. Today’s chatbots are designed to mimic us. They speak in the first person, respond with empathic phrasing and adjust their tones to match ours. Anthropic CEO Dario Amodei even claimed recently that Claude, his company’s chatbot, may experience anxiety.
But none of this indicates personhood, consciousness or even comprehension. These systems don’t have selves or feelings. They simply generate text by identifying patterns in enormous datasets.
That difference matters. When we mistake pattern‑matching for thinking, we risk self‑deception — and with it, serious consequences.
First, we risk giving up our own judgment. When a chatbot sounds confident and human, we tend to trust it. Studies show that people defer to AI advice even when it’s wrong, especially in high‑pressure situations. As AI tools increasingly shape medical decisions, legal strategies and news consumption, treating chatbots as wise counselors rather than statistical mirrors could lead us to make dangerous decisions, mistaking AI’s confidence for competence.
AI anthropomorphism also lets tech companies evade responsibility. When their systems produce biased, harmful or outright fabricated responses, companies often act as if their AI is just a curious child that “learned” something unexpected. But AI doesn’t discover behaviors on its own. Its outputs reflect design choices, training data and the incentives of the humans who build it. Blurring the line between tool and agent makes accountability more difficult.
Lastly, we risk replacing real relationships with artificial ones. Companies including Character.AI and Replika market their AI companions as being “always here to listen and talk” and “always on your side.” For people struggling with loneliness, the appeal is obvious. But a system designed to mimic empathy is incapable of offering genuine emotional support. If we come to rely on chatbots as therapists, friends or stand-ins for human connection, we may only deepen the very isolation that tech CEOs claim these tools are supposed to alleviate, in extreme cases contributing to self-harm, so-called “AI psychosis” and even suicide.
Fortunately, avoiding the anthropomorphism trap doesn’t require technical expertise. It starts with language. Do not ask a chatbot, “Why did you say that?” Ask instead, “How was that generated?” Instead of wondering what an AI “thinks,” ask what data or instructions shape its output. Small linguistic shifts keep our attention on process rather than personality. They also remind us that there is no person on the other side of the screen.
We can also preserve our critical autonomy by being skeptical of AI-generated content. When a system speaks in the first person, it can feel authoritative, even wise. But fluency is not insight. AI is not an epistemic authority. It is a tool: a useful one, but fundamentally limited.
Of course, personal habits are not enough. Regulators should require companies to disclose human-like features, such as voice, personality scripting and conversational framing, so users know when they’re being nudged to see a machine as a mind. Public institutions, from hospitals to schools, should develop guidelines to protect against anthropomorphism.
Tech companies have every reason to develop AI that feels more human. It’s profitable. It’s persuasive. And it keeps us engaged. But we don’t have to play along.
AI is not a person. It doesn’t think, care or understand. It is an algorithmic reflection of the internet: the good, the bad and the ugly. When we mistake that mirror for a mind, we risk losing something far more important than technological wonder: the ability to tell the difference between simulation and reality. The future of human judgment may depend on getting that difference right.
Moti Mizrahi is professor of philosophy of science and technology at the Florida Institute of Technology. His most recent book is “Playing God With Emerging Technologies.”
The post The real danger of AI is treating it like a human appeared first on Los Angeles Times.