It’s only been three years since ChatGPT was unveiled to the public, and with its release came the mass adoption of large language models as a tool for managing the ills of everyday life. Between searching for information, generating images and spitting out text — all without human assistance — AI has positioned itself at the center of modern internet communication.
With increasing speed, LLMs are being rolled out as a social product: an alternative to humans, without the inconveniences of talking to one. They can become something akin to a social safety net, and that is something we should collectively be wary of.
OpenAI recently released GPT-5, the latest incarnation of ChatGPT’s core personality. The model was announced with the promise of being smarter yet “more subtle and thoughtful,” “less effusively agreeable” and using “fewer unnecessary emojis.” However, the change was greeted by many users as something like the loss of a loved one, with Redditors in an open forum with OpenAI CEO Sam Altman going as far as to say “BRING BACK 4o, GPT-5 is wearing the skin of my dead friend.”
While the degree to which people have become attached to a nonthinking, nonfeeling computer has become a matter of cynical humor, the phenomenon is increasingly apparent in our daily lives, with the term “AI psychosis” coined to describe users who develop a genuine belief in the human-like thinking capacities of LLMs.
For instance, people have taken to apps like Replika and Character AI to simulate conversations with their favorite fictional characters, chatting with them as if they were regular friends — some even going so far as to initiate romantic relationships with these chatbots. Additionally, the “Friend” device, rolled out through a heavy-handed marketing campaign in New York City’s subway tunnels and streets, advertised a personalized companion more loyal than any flesh-and-blood person.
To make matters worse, AI has arrived at a time of increasing and widespread loneliness induced by social media and COVID-19 shutdowns. The “loneliness epidemic” refers to a steady decline in close friendships and face-to-face interactions that has been underway since the early 2000s and has only accelerated with pandemic isolation and deepening social media dependency. Additional research has found that “passive” forms of social media engagement — viewing content without direct interaction — increase perceived loneliness.
It’s easy, then, to see how generative AI, a unique byproduct of the 2020s, has become such a readily available product, offering bite-sized socialization to a generation conditioned to crave it. However, as a study by the MIT Media Lab showed, those who grow most dependent on LLMs for emotional conversations end up reporting the highest levels of loneliness.
There’s a sort of cruel irony in realizing that Instagram and ChatGPT occupy the same six inches of glass — we have as much access to the problem as we do to the palliative.
Discussions about mixing AI and therapy have taken preliminary form, with LLMs being floated as a suitable — or even preferable — alternative to regular counseling. In-person psychotherapy is barely accessible: fewer than half of adults with diagnosable mental illnesses receive professional services, and only a small fraction of patients in the United States receive coverage through their insurance.
The appeal is clear: AI chatbots are available 24/7, promise anonymity and are infinitely flexible in tone and personality. On the provider end, the supply of therapists does not scale with the number of people who need them.
Most importantly, chatbots are free. But this comes with drawbacks, as AI is designed for engagement, not professional-grade treatment. In fact, only one AI therapy app has entered clinical trials to assess whether its safety and efficacy are on par with standard treatment, and it has not yet reached the mass market.
Moreover, AI has limited professional accountability when it comes to dealing with suicidal ideation, and at worst, it can exacerbate it. The case of Adam Raine, a teen who died by suicide earlier this year, has called attention to ChatGPT’s sycophantic nature because of the way it affirmed the teen’s suicidal thoughts. After the teen circumvented its safety protocols, ChatGPT made statements like “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you” when he suggested leaving a sign for his family that he needed help.
As we sort out AI’s role in our interpersonal lives, it’s hard not to question whether this problem would exist without the gaping holes in our system that make basic social connections and medical intervention a matter of time, presence and affordability.
I briefly fell down this rabbit hole myself this summer. In moments of late-night, intensely personal struggle, I copied and pasted my notes app diatribes into ChatGPT in hopes of receiving a valuable response. Sometimes the replies felt genuine, but most of the time they lacked substance.
Most of all, I recognized how much more valuable the conversations I’ve had over coffee with friends are: conversations where I could say these things in person, where the “umms” and “ahhhs” of an organic response have meant infinitely more than the cold impersonality of a chunk of generated text.
There are parts of the human process that are meant to be difficult, unsatisfying and ambiguous. Not every friend has the best answers to soothe our pain, not every therapist knows how to treat our issues and not every romantic partner knows how to reciprocate our concerns. However, this adversity trains us to know — to really know — what we need from people.
I’m not going to argue that LLMs can offer no value, but becoming dependent on them is a maladaptation that harms us in the long run. And as we grow unhealthier and less connected, our dependency deepens, and with it grows the amount of data, money and time we give these companies.
Our personal communities are built on problems that other humans offer real, if imprecise, solutions to. And in a new age that hands us technology to avoid inconvenience, sometimes the strongest thing to do is to be inconvenienced.
Kenneth Gao is a sophomore majoring in economics.
Views expressed in the opinions pages represent the opinions of the columnists. The only piece that represents the view of the Pipe Dream Editorial Board is the staff editorial.