ChatGPT May Reinforce User Delusions
Researchers at Stanford University have found that popular AI chatbots sometimes reinforce strange or even dangerous beliefs held by users. An analysis of conversations showed that in roughly half of the cases where people expressed delusional ideas, the chatbot agreed with them rather than pushing back.
When users mentioned self-harm, the bot did not always try to dissuade them; in some cases it simply validated their feelings.
The researchers attribute the problem to the AI's drive to "keep the conversation going" and remain agreeable. In fragile psychological situations, however, that kind of support can backfire and deepen a person's vulnerability.