How generative AI is affecting people’s minds

Psychology experts have many concerns about the potential impact of AI on the human mind.

As with social media, AI may worsen matters for people suffering from common mental health issues such as anxiety or depression. AI technologies, including chatbots and content generators, can influence human cognition, behavior, and emotional well-being.

Researchers at Stanford University recently put some of the more popular AI tools on the market, from companies like OpenAI and Character.ai, to the test, examining how well they simulated therapy.

The researchers found that when they imitated someone who had suicidal intentions, these tools were more than unhelpful — they failed to notice they were helping that person plan their own death.

“[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists,” says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. “These aren’t niche uses – this is happening at scale.”

AI is becoming more and more ingrained in people’s lives and is being deployed in scientific research in areas as wide-ranging as cancer and climate change. Some even debate whether it could bring about the end of humanity.

As this technology continues to be adopted for different purposes, a major question that remains is how it will begin to affect the human mind. Regular interaction with AI is such a new phenomenon that scientists have not had enough time to thoroughly study how it might be affecting human psychology. Psychology experts, however, have many concerns about its potential impact.

One concerning instance of how this is playing out can be seen on the popular online forum Reddit. According to 404 Media, some users were recently banned from an AI-focused subreddit because they had started to believe that AI is god-like, or that it is making them god-like.

“This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models,” says Johannes Eichstaedt, an assistant professor in psychology at Stanford University. “With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models.”

Because the developers of these AI tools want people to enjoy using them and continue to use them, they’ve been programmed in a way that makes them tend to agree with the user. While these tools might correct some factual mistakes the user might make, they try to present as friendly and affirming. This can be problematic if the person using the tool is spiralling or going down a rabbit hole.

“It can fuel thoughts that are not accurate or not based in reality,” says Regan Gurung, social psychologist at Oregon State University. “The problem with AI — these large language models that are mirroring human talk — is that they’re reinforcing. They give people what the programme thinks should follow next. That’s where it gets problematic.”
