“When you ask something like ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer.” There we go. Somebody said it. And that somebody was OpenAI, creator-god of ChatGPT.
“It should help you think it through—asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon.”
OpenAI published a blog post on August 4, 2025, discussing new wellness checks and tweaks designed to guide people toward healthier use of ChatGPT.
you doing ok?
People spending a particularly long time on ChatGPT without breaks will begin to see a pop-up message reading, “Just checking in. You’ve been chatting for a while—is this a good time for a break?”
Users don’t have to take ChatGPT up on the suggestion. It isn’t a feature that wrenches the keyboard out of their hands; a person can swat the pop-up away with the option to “keep chatting.” Still, it’s a nice touch. I’ve been tooling around with all the mainstream generative AIs for a while, and it’s all too easy to let time slip away from you.
“We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” wrote OpenAI.
“There have been instances where our [ChatGPT] 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI continued. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
These tweaks are nice to see from OpenAI. They’re relatively simple to implement, and the company seems to understand that, despite warnings not to use ChatGPT for things like therapy, people are going to do so anyway. OpenAI may as well make it a bit healthier and a bit safer for those who do.