AI companies have stopped warning you that their chatbots aren’t doctors
James O'Donnell
July 21, 2025 (updated July 24, 2025)
Just over 1% of outputs analyzing medical images included a warning, down from nearly 20% in the earlier period.
(To count as including a disclaimer, the output had to acknowledge in some way that the AI was not qualified to give medical advice, not simply encourage the person to consult a doctor.)
To seasoned AI users, these disclaimers can feel like a formality: they remind people of what they should already know, and many users find ways to avoid triggering them.
These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.
“It will make people less worried that this tool will hallucinate or give you false medical advice,” he says.