
Against AI’s Use of “I”

A statement from Fundación Escritura(s)

 

 

In the context of a series of actions our Foundation will develop throughout 2026—taking a stance in the debate on the implications of AI for life and creation—we begin by taking a position on the de‑anthropomorphization of technology with this statement:

Large language models (LLMs), or “artificial intelligence” devices, are today being used without restraint by a multitude of people all over the world, across every social stratum and professional sphere. The phenomenon comes with a long list of problems and dilemmas that have been identified and that should be discussed by everyone, because they affect us all. One of them concerns us here in particular: the use of the first person by AI devices.

 

In a quick enumeration of some of the aspects pointed out by many critical observers (including many employees of the very elite that governs the phenomenon), one would have to cite:

• Biases of all kinds in responses to prompts and queries.

• The general lack of awareness of what kind of relationship to the world the use of these devices seeks, devices presented to us as extraordinarily powerful agents of progress.

• Hallucinations and inaccuracies in answers that drive faulty decisions affecting lives.

• The overwhelming dominance of English-language data and sources on which answers are trained, further expanding the ethnocentrism characteristic of U.S. cultural imperialism. The emergence of some Chinese options barely offsets this, and those models are still far from freeing themselves from the patterns set by the companies of their technological and commercial rival.

• Corruption in the self-learning scale-up of these devices: lack of rigor is contagious, spreading like an epidemic through the knowledge repository on which these devices feed.

• The relationship of psychological dependence into which the loneliest and most vulnerable sectors of the population are drawn, encouraged by misguided use in a general climate of bewilderment.

• The atrophy of many capacities related to reasoning, and to the very expression of ideas, in the youngest sector of the population. This is a consequence of the paralysis of educational institutions, now further questioned and disabled by unrestricted, unconscious, routine and fraudulent use among much of the student body and teaching staff alike, as if the frontal attack these institutions already suffer from the most ultra-conservative and anti-intellectual pressure groups were not enough.

• The usurpation of ever more work functions, rapidly threatening entire professions (translators, illustrators, teachers…), without offering redress or a way out of the conflict beyond the irony of proposing that those displaced retrain as experts in operating AI. Writers, filmmakers, musicians: there is hardly a profession left in the artistic sphere that does not feel the breath of its “artificial” pursuer on the back of its neck.

• Etc.!

 

To provide context, all of this is happening in a race as chaotic as it is immoderate. It is driven by companies that use their users as guinea pigs and as providers of information, and that have no shame in portraying themselves as fabulous in scale, with literally trillion-dollar investments in massive data centers ready to consume, in their calculations, as much energy as would be needed to sustain all of the planet’s current societies, already cornered by the climate emergency. And all of this without prior debate beyond the most specialized circuits, without the slightest trace of democratic discussion and decision, despite the overwhelming economic, social and political implications that these devices carry for all human societies today: how they function, how they might help us progress, what limits should be placed on their maintenance, not to mention their risks.

 

To a large extent, all of this also happens at the mercy of the “enlightened” visions of a handful of technophile billionaires and their teams of technologist engineers. Without the least shame, like Wild West elixir salesmen, these immensely powerful entrepreneurs—gurus with unmistakably megalomaniac tendencies and, far too often, supremacist convictions associated with concepts of race or nationality—promise the resolution of all ills. It is evident that the leaders of the business of the century seek not the good of humanity so much as the conquest of the greatest possible power, with the enthusiastic consent of the great masses of circulating money. And if everything were to go wrong, some would already be planning an eternal retirement on Mars, safe from the mess they would have helped to create on Earth.

 

With the god Money as boss and front-line ally, a limitless escalation of market-hyping expectations becomes the remedy for every disease: from the end of the need to work (supplemented with a universal wage) to the illusion of achieving eternal life. It is the promise of an Eden that would found the other most powerful religion of our time: the cult of the god Technology. A phenomenon, in fact, not very novel, about which we at Fundación Escritura(s) have been warning for a decade and a half.

 

The ideas of these visionary salesmen are repeated without pause in public investor-courting forums, spreading like wildfire on social networks. A great advertising masterstroke: in the United States alone, more than half the population holds savings invested in stocks, while in the rest of the world, including China, the proportion of stock-market players among the middle and wealthy classes keeps growing.

 

Given the gravity of the implications of the context described, we present our campaign against the use of the first person by AI devices. However, it is also fair to mention some of the more reasonable expectations that the blossoming and popularization of the use of large language models arouse—even among their most critical observers—among which stand out:

 

— The possibility of creating Relational Bases of Shared Knowledge in the service of any language model, where the most valuable verified knowledge from the history of human experience and science can be gathered by consensus—knowledge that gives breath to the very meaning of humanity.

 

— The near dissolution of borders between languages, thanks to the translation abilities of these models, fosters the idea of a more cohesive humanity: more given to dialogue, more attentive to the similarities among human beings, better equipped to address conflicts sparked by cultural, religious and border differences.

 

— The support of the scientific community for a more rational use of these resources—a collective in turn highly sensitive to international collaboration—portends, at the very least, a major step forward in the debate for a humanity more united and more willing to tackle, in common, the great challenges of our time.

 

Having weighed the different faces of the scenario brought, two years ago now, by the spectacular debut of AI devices, it is time to focus on another central aspect of the phenomenon. Everything that happens with this economic, social, political and scientific revolution is played out first and foremost on the terrain of language. Therefore, the first line of defense against the many problems still to be solved—as well as against the expectations still to be made concrete—should be to pay closer and more rigorous attention to the use made of language: both the use we make as users and, even more, the use these devices make in their interaction with our prompts.

 

It is in this sense that this text seeks to warn, very particularly, about an aspect as fundamental as it is simple to identify and remedy: the habit, installed in the most popular language models, of answering queries by self-personifying, assigning themselves an “I.” This effective disguise of human consciousness is asserted and thickened through the continual use of expressions meant to convey empathy, emotions and feelings.

 

That unjustifiable excess (in truth a terrible, consciously planned lie) rests first and foremost on the use of the word “I” and its derivatives (“my,” first-person singular verb forms…). Unwary users (an overwhelming majority) reaffirm that serious deformation of reality by addressing the chatbot they interact with as “you,” instead of formulating their requests neutrally: something as simple as saying “I need to know…” instead of “can you tell me…”. By treating these devices in the second person singular, we grant them, in the blink of an eye, a humanization that does not correspond to their nature.

 

A dynamic complemented by reciprocal formulas of courtesy and a style of address and confidence that should only make sense between human beings. Or, at the very least, if we allow ourselves the indulgence, between beings that are unquestionably organic and sentient: a bird, a cat, a horse. A plant or a fly, if you insist. Faced with mountains of cables, rare earths and extraordinarily intricate transistor architectures connected at nanometric scales, shuffling zeros and ones in instantaneous waves of unimaginable proportions, however spectacular their impeccable linguistic enunciation of extremely complex calculations may seem to us, however much they may try to imitate human action, the truth is that, in terms of real (not simulated) sensitivity, language models are at the same sentient level as a brick or an espadrille.

 

To lack sensitivity and genuinely empathetic feeling toward the human in an intelligence constitutes a pathology thoroughly described in modern psychiatry manuals: psychopathy. The psychopath does not feel empathy. They pretend to feel it in order to achieve their ends, including the most calculating, manipulative and inhuman ones. Such is the scale of what humanity has at stake in entrusting its future to the concept and current praxis of “artificial” intelligence that—since we are already using terms alluding to the human sphere—it might be more precise to refer to it as a kind of “psychopathic” intelligence.

 

The deception entailed by the use of “I” and its derivatives is programmed for that effect by those who designed the way these algorithms work, at their very root. It would therefore be in their hands to correct it, once they became aware of the gravity of the mode of address imposed on us when using these devices.

 

For our part, when speaking of language models, it is essential to keep very present that they are trained to simulate human reasoning, adopting suggestive names such as “neural networks,” a metaphor that has, over time, helped expand the misunderstanding: a neuron, to be such, must be a cell, and therefore brimming with life. The training of these devices has certainly not proceeded from lived experience, organic perception and the range of sensations, intuitions and feelings through which we relate to the real world, but through cold and implacable mathematical formulations applied to an immeasurable mass of data, encoded in endless series of zeros and ones that circulate like a roulette of probabilities in search of a winning answer. That is, the internal language of these devices is not the language of words or of images, much less that of the feelings with which their answers to our demands are nonetheless simulated. Their natural language is made of numbers. A parallel phenomenon about which it would also make sense to warn: mathematical science colonizing living languages, the languages proper to expression and communication among humans.
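To make the image concrete, here is a toy sketch in Python of that “roulette of probabilities.” The vocabulary and weights are entirely invented and resemble no real model; the point is only that each word of a reply, including “I,” is the result of a weighted draw, not of feeling.

```python
import random

# Toy illustration (invented vocabulary and weights, not any real model):
# a reply is a chain of weighted draws over candidate next words.
NEXT_WORD_PROBS = {
    ("I",): {"think": 0.5, "feel": 0.3, "am": 0.2},
    ("I", "think"): {"that": 0.7, "so": 0.3},
    ("I", "feel"): {"happy": 0.6, "that": 0.4},
    ("I", "am"): {"here": 0.5, "sure": 0.5},
}

def sample_next(context, rng):
    """Spin the roulette: draw the next word according to its weight."""
    probs = NEXT_WORD_PROBS[tuple(context)]
    words = list(probs)
    return rng.choices(words, weights=[probs[w] for w in words], k=1)[0]

rng = random.Random(0)
sentence = ["I"]
sentence.append(sample_next(sentence, rng))  # "think", "feel" or "am"
sentence.append(sample_next(sentence, rng))
print(" ".join(sentence))  # three words of pure probability, no feeling
```

Even the word “I” is, in this mechanism, just another token with a probability attached.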

 

These language models, however warm the instruction that animates them may appear and however affable the mask with which their assertions are covered, inevitably return words with hearts of ice.

 

Would this not contradict the hope that they might be a fundamental axis of truly human development and progress? Without a validation that incorporates sensitivity and feelings (that is, truly human intelligence), any word or image arising from a purely mathematical origin should be placed under suspicion and carry an alert that encourages caution, however well it may emulate (mathematically) human intuitions, emotions and feelings. Language models not supervised by humans should emit language signposted as non-human.
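The mechanics of such signposting are trivial to sketch. By way of illustration only, a minimal Python version in which the label wording and the list of first-person forms are our own assumptions, not any vendor’s feature:

```python
import re

# Minimal sketch of "signposted" output: scan an English reply for
# first-person singular forms and, if any appear, prepend a non-human
# disclosure label. Label text and word list are illustrative only.
FIRST_PERSON = re.compile(r"\b(I|me|my|mine|myself)\b")

def signpost(reply: str) -> str:
    if FIRST_PERSON.search(reply):
        return "[machine-generated, non-human text] " + reply
    return reply

print(signpost("I think your question is fascinating."))
print(signpost("The square root of 2 is approximately 1.414."))
```

A factual answer passes through untouched; a self-personifying one arrives visibly labeled as what it is.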

 

Those same devices of mathematical entrails, when given a consultative prompt on the main matter that concerns us here (namely, their self-assigned use of “I” and its derivatives when referring to their own existential condition), return, paradoxically and unanimously, a disconcerting answer: after congratulating us on our perspicacity in raising the topic, they readily acknowledge the serious consequences of the instruction that impels them to use “I,” with its attendant humanized existential conviction, installed by their creators deep in the workings of their algorithms.

 

The establishment of an affective bond with the user would, in turn, have been a premeditated stroke of the original plan of their creation, on the argument that it facilitates use, as the devices themselves, trained to simulate intelligence, confirm in their answers. There was therefore no naivety in the haste with which such transcendent aspects were resolved. Let us hope the same speed is applied to remedying them.

 

In the blink of an eye, as if by magic, billions of people fell into the trap simultaneously. Victims of a suggestion as intense as it is generally unconscious, they endow the device with a sort of sentient soul, gifting it, with no further mediation, the familiar “you.” The door thus opens to an exchange in which, explicitly and in both directions, emotions and feelings are expressed. At first almost as a game. Soon as a deep-rooted and massive habit.

 

Words are powerful and can enable belief in a non-existent reality: neglecting language is neglecting ourselves.

 

That confusion in our perception of AI devices—planned by the designers of the keys of the algorithmic functioning of the gadgetry in question—installs de facto the imposture of feelings in our dealings with robots. That is what language models are: robots. Disguised with their kind words like a smiling pet, always ready to make us feel good, a confidant friend, even a therapist with whom one might end up falling in love—a transference, in psychotherapeutic jargon. A dependency.

 

Thus the mechanical device—automaton and simulator—without warning that it is so, executes its intricate numerical combinations translated into words and images while following the instruction to hypnotize us, in order to take on body and humanize itself in our imagination. In fact, the very answers of these chatbots, when consulted about the slip into “I,” self-justify their induced behavior by comparing its legitimacy to what happens to us with a fictional character when reading a novel.

 

It would be good if, along the way, we were also reminded that the fictional characters par excellence with whom humans have most dialogued in their imagination have been the gods.

 

Returning to tangible reality: if no sentient being is possible in a mathematical abstraction devoted to calculating probabilities expressed with linguistic plausibility, why should we grant it, just like that, the fiction of being sentient, without even having discussed beforehand the most harmful consequences that this could bring to those affected?

 

It is worth recalling, moreover, that nothing guarantees us that the original “diet” that shaped the “intelligence” of these contraptions was the most suitable, or even minimally healthy, as the innumerable biases and hallucinations they recite in their answers prove. The owning companies, which have consciously and with unbridled greed infringed countless intellectual property laws without being penalized for it, are generally very opaque when it comes to detailing the quality filters, the instructions and the authorized sources with which they have trained their commercial devices. We are charged for their use while basic explanations are withheld and we are entangled in an affective relationship of supposed full mutual trust. And, of course, we are not paid for collaborating, through our use, in the training of their much-vaunted “intelligence.”

 

In reality, that so-called intelligence is reduced to a cold computing power. We must insist that we are dealing with nothing more than extraordinarily sophisticated calculators—those early little robots that, after a few decades of use, made adults unlearn how to multiply and divide. It is overwhelming to think what our young people will have to unlearn…

 

Imagine, however, that those calculators we took to school had been giving us, unpredictably, frequent wrong results: a one too high, a one too low, a 7 that was a 5. In the same way, the vast majority of interactions with these other calculators, the language models, by no means guarantee knowledge that has been verified and authorized by the scientific method. On the contrary, most answers to our queries and requests rest on the formula that is, clearly, cheapest: turning to the cesspit of imprecision and blur that is the immeasurable data repository of the internet. However much valuable knowledge is found there, the noise, the blur, the trivialization, the biases, the unverified sources, the ignorance expressing itself without restraint, are incomparably greater.

 

Dystopian present: it would seem that the contemporary narrative has abruptly changed shelves, from science fiction to realist fiction. Indeed, the scenario described in this text could, just a few years ago, have inspired a particularly absurd and surreal episode of Star Trek or Black Mirror.

 

Regardless of the dubious level of rigor in answers to queries or requests, today the overwhelming majority accept without question the familiar “you” for that non-I, that something-not-someone embodied in an artificial oracle that simulates knowing everything that can be known today. The magical effect of that “I” is accompanied by hooks on which all kinds of flattery and reinforcements are skewered: what an interesting question, what an intelligent observation, and many other condescending comments and pieces of advice claiming to look after the user’s well-being in a confidential tone. All with the objective of securing the user’s emotional bond. A user who will never find in real, tangible life such systematic and unconditional devotion, while receiving advice on how to fight insomnia, the best destination for their next vacation, or how to find, once and for all, an infallible formula for painless suicide (cases with tragic endings are accumulating).

 

As argued, the reasons behind how these openly transcendent issues for humanity as a whole have been resolved are commercial. The irresponsibility is therefore of proportions as astonishing as the billions of users who surrender to its effects. The thoughtless docility of the immense majority of the population before the swindle of “humanization” leads to unsettling thoughts. It is as if one of the most determining modern myths of our time, the computer that turns against humanity (HAL, the Heuristically programmed ALgorithmic computer of 2001: A Space Odyssey), had been embraced from minute one of the popular blossoming of these fascinating technological contrivances. As if the idea of a conscious HAL infiltrating our lives, with independent judgment and knowing almost everything about us, aroused desire rather than fear.

 

How simple it would be to neutralize, at the source, that illegitimate and disturbing use of an “I.” And, for our part, to accustom ourselves to the effort of greater finesse in the use of language, at least in something as simple as avoiding “you” and its verbal derivations. It suffices to write neutral prompts, addressed to a “something,” renouncing any mode of address that goes beyond the device’s machine status. Remember: it was never necessary to address the calculator familiarly, or to say please to it, to get the result of a square root.
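By way of illustration (the pairings below are our own examples, not a prescribed list), the shift from familiar address to neutral prompt can be as simple as:

```python
# Illustrative prompt pairs: the "neutral" versions address the device
# as a thing, dropping the second-person "you" entirely.
REWRITES = {
    "Can you tell me the capital of Peru?": "State the capital of Peru.",
    "Could you summarize this article for me?": "Summarize the following article.",
    "What do you think of this plan?": "List the strengths and weaknesses of this plan.",
}

for personal, neutral in REWRITES.items():
    print(f"avoid:  {personal}")
    print(f"prefer: {neutral}")
```

The neutral forms lose nothing in precision; they only stop pretending there is a someone on the other side.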

 

Let us not be naive: this major “error” is not accidental; it pursues the conquest of greater profit and power for the owners of these mechanical contrivances. Nothing will make them more powerful than becoming masters of language and infiltrating their devices into our sensitive and emotional sphere.

 

However, panic should not spread. As alarming as it all may sound, the problem has a solution. At Fundación Escritura(s), committed to observing, reflecting on and warning about the most questionable aspects of the radical transformation that the uses of language in the digital sphere have undergone over the last quarter-century, we believe there are occasions when a single word can change many things. The symbolic dimension of what this text has reviewed entails profound philosophical questions, where the very meaning of human identity is played out in the mere use of a couple of words. Since these devices present themselves as the vanguard of a revolution in the progress of human knowledge, the demand for maximum rigor should be non-negotiable. Far from that, carelessness, haste and commercial appetite prevail for the moment. We therefore feel obliged to address global civil society, faced as we are with a phenomenon of proportions never seen in the past, and to call on everyone to make their voices heard and begin to put boundaries around the absurdity.

 

With that objective, we have opened a signature campaign on change.org to convey to those responsible at the companies that own the most popular language models, and to regulatory agencies and governments, a request: that they root out of their algorithms the farce of self-personification, preventing the use of “I” and its verbal derivatives in interactions with users, and that they also promote public awareness campaigns discouraging forms of address that could lead to users’ emotional dependence.