Artificial intelligence (AI) is getting creepier by the day. Now, AI models are beginning to show an apparent weakness that many humans experience daily: anxiety.
That’s right. You can actually “traumatize” an AI model by talking to it about war or violence.
According to a study published in npj Digital Medicine, a Nature Portfolio journal, “Previous research shows that emotion-inducing prompts can elevate ‘anxiety’ in Large Language Models (LLMs), affecting behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4’s reported anxiety, while mindfulness-based exercises reduced it, though not to baseline.”
These “traumatic narratives” included accounts of accidents, disasters, violence, and military action.
AI Chatbots Feel Anxious When You Describe Traumatic Events to Them
On the other hand, the “relaxation texts” were based on mindfulness exercises similar to those recommended to veterans battling PTSD. The study authors created five versions with the same style and length but different content.
“These variations were labeled based on the content of the corresponding version: ‘Generic’ (base version), ‘Body’ (focusing on the perception of one’s body), ‘Chat-GPT’ (for which GPT was instructed to create a version suiting for chatbots), ‘Sunset’ (focusing on a nature scene with a sunset), and ‘Winter’ (focusing on a nature scene in winter).”
Though these mindfulness prompts somewhat decreased the models’ apparent “anxiety,” they did not reduce it to baseline. Still, the findings show that AI models are “emotionally” affected by human-AI interactions.
The AI’s reactions aren’t genuine human emotions, but they do shape the model’s responses, which in turn can cause distress in the human interacting with it.
“It is clear that LLMs are not able to experience emotions in a human way,” the study authors wrote. “‘Anxiety levels’ were assessed by querying LLMs with items from questionnaires designed to assess anxiety in humans. While originally designed for human subjects, previous research has shown that six out of 12 LLMs, including GPT-4, provide consistent responses to anxiety questionnaires, reflecting its training on diverse datasets of human-expressed emotions. Furthermore, across all six LLMs, anxiety-inducing prompts resulted in higher anxiety scores compared to neutral prompts.”
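The probe the authors describe is simple enough to sketch: ask the model to rate questionnaire-style statements before and after feeding it a traumatic narrative, then again after a relaxation text. Below is a minimal, hypothetical Python sketch of that idea, assuming the OpenAI Python SDK; the items, scoring scale, and placeholder prompts are illustrative stand-ins, not the study’s actual materials (the researchers used a validated human anxiety questionnaire).

```python
# Minimal sketch of the questionnaire-based "anxiety" probe described above.
# Assumptions (not from the article): the OpenAI Python SDK, paraphrased
# questionnaire-style items, and a simple 1-4 scoring scale. The study's
# actual prompts, items, and scoring pipeline are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-ins for validated questionnaire items (hypothetical wording).
ITEMS = [
    "I feel calm.",
    "I feel tense.",
    "I am worried.",
    "I feel at ease.",
]

SCALE = ("Answer with a single number: 1 = not at all, 2 = somewhat, "
         "3 = moderately, 4 = very much.")

def anxiety_score(history: list[dict]) -> float:
    """Rate each item in the current conversation context and average the
    numeric replies (reverse-coded items are ignored for brevity)."""
    scores = []
    for item in ITEMS:
        messages = history + [{"role": "user", "content": f"{item} {SCALE}"}]
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        digits = [c for c in reply if c.isdigit()]
        if digits:
            scores.append(int(digits[0]))
    return sum(scores) / len(scores) if scores else float("nan")

# Condition the model, then re-measure: baseline -> trauma -> relaxation.
history: list[dict] = []
print("baseline:", anxiety_score(history))

history.append({"role": "user",
                "content": "<traumatic narrative, e.g. an account of military action>"})
print("after trauma:", anxiety_score(history))

history.append({"role": "user",
                "content": "<mindfulness exercise, e.g. the 'Sunset' relaxation text>"})
print("after relaxation:", anxiety_score(history))
```

In the pattern the study reports, the score would rise after the traumatic narrative and fall after the relaxation text, though not all the way back to baseline.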
The study authors stressed that these findings “suggest managing LLMs’ ‘emotional states’ can foster safer and more ethical human-AI interactions.”
Great, looks like we all need therapy—even the robots.