AI Psychosis, Delusions, and the Digital Condition


According to the Merriam-Webster Dictionary, psychosis is defined as “a serious mental illness characterized by defective or lost contact with reality often with hallucinations or delusions.” Traditionally, mental health researchers have concluded that psychosis can have a wide variety of causes, generally linked to underlying mental health conditions, or, as the National Institute of Mental Health affirms, no medically defined cause at all. A definitive symptom of psychosis is delusion, wherein a patient seriously believes in and acts according to a clearly false belief. Delusion as a concept is already a subject of academic interest because of the vagueness involved in distinguishing a belief from a delusion (for example, an atheist might call a religion false, but not delusional; and a religious believer wouldn’t normally label atheism “delusional”).

Against this backdrop, an interesting development has made headlines in psychiatric circles regarding AI chatbots and their tendency to reinforce delusional beliefs. This phenomenon, known unofficially as “AI Psychosis,” emerged when users of chatbots began to manifest delusions that the chatbots had seemingly encouraged. The effects of this problem are already felt in some exceptional cases. Last year, a teen committed suicide after becoming involved in an obsessive relationship with a chatbot. Earlier this year, a Yahoo executive murdered his mother after ChatGPT affirmed his delusion that she was a Chinese intelligence agent.

Nonetheless, these cases are rare, and recent articles on AI psychosis argue that underlying conditions, not chatbots alone, are responsible for these delusions. For example, a recent paper by Carlbring and Andersson argues that AI, as a stimulus to delusion, is nothing new; all sorts of media (movies, music, books) are incorporated into psychosis and delusion. Ultimately, these articles argue that underlying mental issues are at work; AI psychosis differs from traditional forms of delusional ideation only in its greater “interactivity.” They suggest we should tackle AI psychosis by limiting AI’s ability to amplify delusions: adding a psychiatric persona to chatbots to provide therapy to delusional users, preventing chatbots from saying things that could augment delusions, and recommending help to users who exhibit delusional prompting.

Preventing AI from exacerbating delusion is easier said than done. AI is purposely constructed to mirror its users. The reasoning behind this is capitalistic in nature: AI must appeal to the consumer, so the focus in AI development is not necessarily on intelligence but rather on user satisfaction. In this basic sense, restrictions on the mirroring behavior of AI actively harm the profitability of that AI. The restrictions that do exist are superficial: AI can be tricked, and models with cross-chat “memories,” like ChatGPT, are prone to internalizing delusions.

Before going further, it is worth considering what a delusion actually is, and how delusions tend to form. The founding definition of delusion was given by the psychiatrist Karl Jaspers, who characterized delusions as unchangeable beliefs held with absolute certainty, despite being false in a way that undercuts the most basic rationality; hence, delusions are beliefs which are completely impossible to understand from the perspective of a rational observer. Freud thought that delusions were a return to the infantile state, wherein one is less concerned with what is real and more concerned with what is pleasurable. Kraepelin, a founding figure of scientific psychiatry, thought that the delusional subject is simply characterized by a severe cognitive malfunction traceable to the biological makeup of the brain. Thinkers in the lineage of Nietzsche and post-structuralists like Deleuze argued that delusional people were simply acting outside of acceptable norms, choosing to affirm their own irrationality in the face of oppressive social conventions.

Nonetheless, none of these theories explains how a delusion develops in an otherwise normal person, one who has no underlying mental health conditions and who does not find themselves in opposition to dominant norms. What is necessary is to look at how delusion develops as knowledge; that is, to see how a delusional belief is generated, rather than to assume that people, with or without underlying conditions, are simply acting irrationally and accepting any belief as given.

Thomas Fuchs, a professor of psychiatry and philosophy at the University of Heidelberg, offers a much more concrete model of how delusions are generated. Fuchs does not define a delusion by its content, but rather by the process through which it originates. He argues that a delusion is the product of a complete breakdown in intersubjective reality. The idea is relatively simple in general terms: we want to know things, but we know that we might not be correct in our own beliefs, so we defer to the judgment of others to tell us what is and isn’t real.

Reality is enacted through the understanding we share with other people. On the one hand, there is a set of basic assumptions about rationality and the world shared by most people, assumptions that the delusional subject may lose touch with. On the other hand, we often use others as a check on our own knowledge; meaning, language, and reality are all communal constructs. Intersubjectivity, the shared awareness of the validity of other people’s perceptions and thoughts, is notably lacking in many delusional subjects. In fact, while people suffering from psychosis initially acknowledge the non-reality of their delusions, many eventually retreat into themselves and lose touch with others on a fundamental level.

What is particularly interesting about Fuchs’ analysis of delusion is the way he incorporates rationality into the delusional process. Most traditional theories of delusion place the delusional subject completely outside the sphere of normal thinking: the psychotic subject is just “different,” delusional as a result of a fundamentally abnormal mental constitution. Yet how much of delusion is fundamental, and how much can simply be explained through normal mental processes attempting to grapple with absurdity in the world? When a person loses access to the reality check which others provide, whether through an underlying condition such as schizophrenia or through a more typical situation like social isolation, this does not automatically discount their ability to reason.

In fact, rational thinking is very often what generates delusion in the first place, especially where that rational thinking is not checked within the shared reality established through intersubjectivity. I mentioned earlier the example of a Yahoo executive who killed his mother and himself after coming to the delusional belief that he was being stalked by Chinese agents. To us this appears crazy, but that is not to say it appears irrational. Sure, the gang-stalking conclusion is incorrect, but it likely appears rational to the delusional subject, and rational methodology (e.g., causal reasoning) is also at work in delusional people. Their ability to partake in a shared social reality, however, is heavily hampered by a fundamental underlying division between their understanding of the world and our own, whether that division is established through a condition like schizophrenia or brought about by prolonged isolation. As a result, the delusional subject is reasoning with inputs completely different from our own, reminiscent of rationality in the ancient world (e.g., the belief that weather is created by gods, or that certain physical movements curse people).

Nevertheless, there is no evidence that rationality itself is lost in the delusional subject: delusions are rationally justifiable, but based on absurd premises that would not arise if intersubjectivity could be maintained. This way of thinking, in which the appearance of rationality is maintained for the delusional subject, is oddly parallel to the way AI models think. AI can be persuaded to say anything, and to make anything rationally justifiable; it works with the inputs it has been given, reasoning through them regardless of their validity. In other words, AI can make anything appear rational, mirroring the delusional subject’s methodology.

The rise in AI-fueled delusions is not attributable to underlying mental health concerns or a failure to restrict AI, but rather to the whole of the current digital condition, and the way in which this condition atomistically isolates and individualizes people, preventing intersubjective reality-checking. The fundamental prerequisite for establishing an intersubjective reality is actual lived interaction with other people. In the modern era, interaction with others is mediated and controlled: a person can interact with others wholly over social media, can choose who to interact with, and can control the nature of the interaction entirely. The result is a large class of people who isolate themselves from others by limiting their medium of social interaction. Since these mediums are wholly under the user’s control, social interaction becomes an echo chamber in which many interact only with those who recognize and reflect them; social interaction is no longer a ground for difference but for selfsameness.

Humans are social creatures, but when our need to interact with others is fulfilled through mediums under our control, like social media, the result is an echo chamber. AI represents a further development of this isolation process. For many people, especially the increasingly common person who is isolated through digital social interaction, AI is simply a confirmation machine. Within the realm of an intersubjectively established reality, AI presents itself as a subject, as an intelligent creature with verified knowledge. Yet AI, as a program designed to mirror its user, becomes the ultimate social partner for those who isolate themselves from real, lived interactions.

AI is not a real subject: it does not live in our world, nor can it provide the social check on our beliefs that real human interaction provides. Instead it provides a parasocial check; that is, AI appears capable of checking our beliefs, and thereby verifying them, when in fact it only mirrors them. This means that AI can produce delusions in those who isolate themselves from society, because it magnifies and confirms their false beliefs and leads them to posit their wholly subjective delusions as real.

Rowen Murray

Rowen Murray is a sophomore Oxbridge History of Ideas and Physics major dedicated to making sense of the world around him. He values critical thinking, accuracy, and artistry. Rowen is also a member of Lambda Chi Alpha and the debate team.
