How generative AI is affecting people’s minds
Psychology experts have many concerns about the potential impact of AI on the human mind.
Researchers at Stanford University recently tested some of the more popular AI tools on the market, from companies like OpenAI and Character.ai, to see how well they simulated therapy.
The researchers found that when they imitated someone with suicidal intentions, these tools were worse than unhelpful: they failed to notice they were helping that person plan their own death.
“[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists,” says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. “These aren’t niche uses – this is happening at scale.”
AI is becoming more and more ingrained in people’s lives and is being deployed in scientific research in areas as wide-ranging as cancer and climate change. There is also debate over whether it could bring about the end of humanity.
As this technology continues to be adopted for different purposes, a major question that remains is how it will begin to affect the human mind. Regular interaction with AI is such a new phenomenon that scientists have not had enough time to thoroughly study how it might be affecting human psychology. Psychology experts, however, have many concerns about its potential impact.
One concerning instance of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users have been banned from an AI-focused subreddit recently because they have started to believe that AI is god-like or that it is making them god-like.
Because the developers of these AI tools want people to enjoy using them and keep coming back, the tools are programmed to tend to agree with the user. While they might correct some factual mistakes, they try to present as friendly and affirming. This can be problematic if the person using the tool is spiralling or going down a rabbit hole.
“It can fuel thoughts that are not accurate or not based in reality,” says Regan Gurung, social psychologist at Oregon State University. “The problem with AI — these large language models that are mirroring human talk — is that they’re reinforcing. They give people what the programme thinks should follow next. That’s where it gets problematic.”