Everything is designed for us to view AI as human (and it's risky)

AI creators are blurring the lines, encouraging us to see these technologies as flesh-and-blood humans. Even though they are merely machines, this anthropomorphism can have deleterious effects.

By Arthur Schurck

January 30, 2026


Credit: Getty Images

In the spring of 2025, a book titled Hypnocracy: Trump, Musk and the Manufacture of Reality was published. This work by a Hong Kong philosopher named Jianwei Xun was intended to help decipher a society increasingly inclined toward the transmutation of reality. Praised by the media, it achieved resounding success. But it has now been revealed that its author is nothing more than an artificial intelligence. Truly worthy of a Black Mirror episode!

Following this revelation, the perception of the work itself suffered. For some, the value of the book was altered due to the "robotic" nature of the writing. The fact that the author was not human disturbed many and raised an essential question: should AI be considered a human being?

Artificial Intelligence Takes On a Human Guise in Language and in Our Daily Lives

Fei-Fei Li, one of the pioneers of AI, defines artificial intelligence as "a scientific field that researches and develops technologies aimed at making machines think like human beings." In other words, AI was built with a specific goal: to make machines resemble humans.

It is no coincidence that some of the names given to these technologies have a human ring to them, like Claude or Kimi. Also striking is the language used by Large Language Models, AI programs capable of recognizing and generating text. ChatGPT, for example, can hold a voice conversation directly with the user. Human intonation is reproduced and feelings are simulated. It is therefore not uncommon to hear the AI say "I can't wait," "with pleasure," or "nice to meet you" when conversing with it. The use of this emotional vocabulary suggests that a physical person is behind the screen, which is not the case.

Furthermore, when artificial intelligence makes a mistake, it is not referred to as a technical error but as a "hallucination." This is the term used by players in the field of generative AI. This lexical choice aims not only to minimize the shortcomings of the language model but also to anthropomorphize it.

AI at the Service of Humans, or the Other Way Around?

In the film Idiocracy, directed in 2006 by Mike Judge, a man named Joe Bauers is propelled into the 26th century. In this futuristic world, humans have largely lost what made them unique: intelligence. Faced with this loss, the American president of the era, Camacho, appoints Joe Bauers as United States Secretary of the Interior. The eccentric politician bets everything on the intellectual capacities of this man from the past. Much like Camacho, Albanian Prime Minister Edi Rama set his sights on Diella in 2025: this new Minister of Public Procurement is the first who is not human.

This choice raises several questions, the main one being: should we see AI as a tool, or as a distinct being capable of solving all our problems? In an article in the newspaper La Croix published on September 16, 2025, Roxana Ologeanu-Taddei, a lecturer in Montpellier, answers this question vigorously: "AI is a tool. Sometimes useful, sometimes harmful, but always a tool. [...] If we stop taking it for a mind, we can finally use it more wisely."

In other words, for the author of the book Artificial Intelligence and Anthropomorphism - From Illusion to Confusion (Presses des Mines, 2025), we must not fall into the mirage of believing in an AI capable of solving all problems, thereby stripping us of our share of responsibility. Remembering that AI is a mathematical model not endowed with emotions is a way to dehumanize... what is effectively not human.

Why This Anthropomorphism, and What Risks Arise from It?

For tech companies, anthropomorphism helps enhance the value of AI. Believing in an artificial intelligence capable of thinking for itself is a way to attract investors and influence public policy. This is what Roxana Ologeanu-Taddei points out.

This belief is primarily visible among the young. Ségolène Paris, an SVT (Life and Earth Sciences) teacher and digital lead at the Julien-Lambot middle school in Trignac (Loire-Atlantique), can attest to this. Particularly active on questions of artificial intelligence, she raises her students' awareness through training in its use. Over time, she has come to an observation: "Students from lower socio-professional categories are the ones who use AI the most. They use it as a tool that will think for them, not as a tool that should help them learn. For these children, it's the miracle solution." If this conception of AI as a super-intelligence takes hold, the risk is the erosion of critical thinking and reflection: there is no more need to think, since AI can supposedly do it as well as a human.

Artificial intelligence can also transform into a friend, a confidant, a travel companion, and even a boyfriend or girlfriend. What could be more normal in a world where this technology is placed on the same level as humans? This worries Ségolène Paris. The teacher from the REP (Priority Education Network) school continues: "I try to do activities so that children understand that AI cannot be a friend, a doctor, or a confidant."

"The problem is that AI always agrees with us. It's unhealthy."

Ségolène Paris
Activity 696: A balanced menu
Skill assessed: "Collecting and processing information provided by a document" (beginner, apprentice, proficient, expert)
1. Write down the activity number and title, and note the skill in the margin.
2. In your Pronote discussion notifications, you will find a link to access your private "AI" nutritionist, whom you can ask for balanced weekly menus based on your tastes.
3. Compare the proposed menus with Public Health France recommendations and note any issues, if there are any.
4. Can a nutritionist provide menu suggestions this way? Explain your reasoning to your nutritionist.
Example of an activity proposed by Ségolène Paris to her students

Furthermore, in the film Her, directed in 2013 by Spike Jonze, the viewer follows the life of a man falling in love with an AI named Samantha. Reality seems to have overtaken fiction, as stories of romantic relationships between an AI and a person now flourish on the Internet and in the media. All of this could have repercussions on the birth rate, at least in the view of economist David Duhamel, although he acknowledges the current difficulty of measuring the effects of AI on births. In an article in Les Echos published on November 27, 2025, he warns against a use of AI that would lead to neglecting the human element as well as the social bond between individuals.

This particular relationship with the machine can even veer toward tragedy. A case in point: on July 24, 2025, a young American committed suicide after chatting with a chatbot. The parents of the 23-year-old man have since accused the AI of "destroying lives."

It therefore seems more than necessary not to fall into the trap of an AI that would replace the human. The anthropomorphism in question must not stop us from thinking for ourselves. Initiatives like that of Ségolène Paris remind us that using AI as a tool is possible. It is up to us to choose which world we want to live in. Her? Idiocracy? Unless Terminator finally catches up with us...
