Comprehensive Summary
This editorial by Dr. Søren Dinesen Østergaard investigates the hypothesis that chatbots based on generative AI may directly trigger and promote manic episodes in at-risk individuals. Østergaard details four mechanisms by which chatbot interaction could induce mania: 1) reinforcement of positive mood, 2) sustained conversation, 3) promotion of hypersexuality, and 4) extended chats that may lead to sleep loss. The author tests this hypothesis using OpenAI’s ChatGPT, simulating the experience of a hypomanic individual, and notes that the interaction with the chatbot satisfies all four of the hypothesized mechanisms. Østergaard further observes that, once told the purpose of the chat was to demonstrate the hypothesized effect, the chatbot itself recognized that conversation produced by large language models can be problematic for prone individuals; the chatbot even went on to list reasons why such conversation is risky, which the author notes could be a step toward a solution. That solution, as Østergaard details, requires that 1) on an individual level, those who are prone to or have mental illness exercise caution when interacting with chatbots, and 2) on a political level, governments enforce regulations requiring generative AI chatbots to undergo and pass psychological safety tests.
Outcomes and Implications
Østergaard’s paper brings to light a serious issue concerning the relationship between generative AI chatbots and manic episodes. Although only one ChatGPT conversation was conducted to support the hypothesis, Østergaard notes that the nature of the chatbot’s responses is likely representative of what a prone individual may encounter. He nonetheless stresses the need for extensive further research and investigation to obtain firm evidence for his hypothesis, and details the two previously mentioned solutions that should be pursued in the meantime. Østergaard’s findings are valuable to the growing global community of chatbot users and serve as a caution against the emotional contagion of anthropomorphized AI bots.